EXPMON (Exploit Monitor) is a cybersecurity service that analyzes files and URLs for exploit detection. We use our own sandboxing and static analysis techniques to determine whether a file or URL is malicious. (EXPMON does not analyze executable files, and we do not detect malware.)

The core idea of EXPMON (and the reason we created the project) is that it runs its sandboxes based on a concept we call "environment-binding". Exploits behave very differently from malware: an exploit may only be detected when it runs within a specific environment. For example, a .pdf file could be a malicious exploit against Foxit Reader but not against Adobe Reader; if we put that .pdf file into an Adobe Reader sandbox, we will miss the exploit. Sometimes an exploit targets a specific version of a piece of software and behaves normally on other versions. All of this tells us that the correct environment setup is the key to exploit detection.

However, the challenge is that there are just so many "environments" in the real world (operating systems, applications, OS/software versions, configurations, etc.) that we simply can't analyze a file or URL in every possible environment setup, let alone handle exploits that run scripts first to probe the environment. Considering this, we've split the EXPMON service into two versions.

There is really no intentional feature cutting between the public and the private service; all the features we had to remove from the public version were removed purely for "opsec" reasons. For example, we had to remove the URL detection feature from the public version because it would not be effective: everyone shares a single public IP address when visiting a URL, so an adversary could simply block our public address to avoid detection. In the private version, because everyone's environment is different, we can do more.

The public version is the primary version we're going to discuss in the FAQ section below. Currently, it's offered as a (free) API service only; we're a bit lacking in web design skills, so no website interface is provided for now. :)

EXPMON was initially created and developed by experienced vulnerability researcher Haifei Li, with help from various friends. We hope the service brings value to the infosec community in fighting the ever-evolving exploit-related threats.

Frequently Asked Questions

What file types can you process, and what are the mentioned "chosen environments"?

At present, EXPMON Public accepts the following file types.

1. doc, docx, docm, dotm, dotx, mht, rtf (Office Word file types)

Those file types will be tested with the following 3 environments:

[Explanation] "win7sp1(original)" means the OS is Windows 7 SP1 with no updates installed, while "win7sp1(updated)" means some updates are installed. "office2010(14.0.7208.5000)" means the tested software is Microsoft Office 2010 and the exact version is 14.0.7208.5000. "[word2010]" means the application used to run the sample is Microsoft Office Word 2010. A "modern" environment running Office 2019 on Windows 10 (64-bit) is also provided, in order to catch potential zero-day attacks.

2. xls, xlsx, xlsm, xltm, xlsb, xlam (Office Excel file types)

Those file types will be tested with the following 3 environments:

3. ppt, pps, pptx, ppsx, pptm, potx, ppsm, ppam (Office PowerPoint file types)

Those file types will be tested with the following 3 environments:

[Explanation] All PowerPoint files will be tested with "powerpointshow", a special run mode of PowerPoint, to maximize the chance of exploit detection.

4. msg, eml (Outlook email file)

Those file types will be tested with the following 3 environments, respectively:


5. pdf (Adobe Portable Document Format)

This file type will be tested with the following 4 environments:

We chose 4 environments for PDF files in order to maximize the chance of detecting PDF exploits. As you can see, a Foxit Reader version is included, so our system covers not only Adobe Reader PDF exploits but also Foxit Reader PDF exploits.

Are the "chosen environments" fixed forever?

No. The chosen environments, as well as the whole system behind EXPMON Public (including the classification rule sets, the static analysis modules, and more), may be changed or updated from time to time in order to reflect the real-world exploit threat landscape. Anyone can access the following URL to check the status details of the current system.

If you have suggestions regarding new file types to process or new environments to add, you're always very welcome to drop us an email at

Is your system capable of detecting unknown/zero-day exploits?

Yes. In fact, detecting unknown and zero-day exploits is the primary focus of this project. We want our system to be a valuable addition to the community, not just a case of "reinventing the wheel".

Zero-day exploits are basically no different from other "historical exploits" as viewed by our system - they are simply reproduced in a newer or the latest environment. As you can see, we carefully chose some newer environments. If an exploit is reproduced in a newer environment, a detection description containing "zero-day" will be provided, and you should pay special attention to the sample.

What's your method to classify malicious samples?

We maintain a set of "rules" to classify the objects we analyze, based on all the information (both dynamic and static analysis data) collected throughout the system.

We barely use third-party tools for classification. So far, the only one is "mraptor" (part of oletools), used for statically detecting Office macros. Whenever we use a third-party tool for classification, that tool is credited in our "detection description" output.

We regularly run big-data analytics to improve our core rule sets. In practice, we've found this process very effective at improving our exploit detection rules.

This means our classifications may change from time to time. For example, if a sample is not detected as Malicious now, that doesn't mean it won't be detected as Malicious in the future. For this reason, every detection response from our API contains the timestamp and the system version that produced the verdict.

Okay, I'm interested in trying it. How do I get started?

Currently, the public version is offered as an API service only; we may develop a "fancy" website later. :) If you're interested, simply email us at for a free API key.

We ask you to read our Terms of Service and Privacy Policy before submitting samples to us, in order to understand the legal boundaries of our service.

What do the input and the output of your API look like?

The input is simple: you provide a file and, optionally, its filename.

The filename is optional, but we recommend providing the same, original filename as seen in the wild. For example, if you receive an email attachment named "test.mht", please submit the file with the filename "test.mht". Submitting the original filename (the extension is actually the key here) helps our system identify the file type more accurately, which leads to better and faster detections.
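To illustrate why the extension matters, here is a minimal sketch of extension-based file-type routing, using the file-type groups listed earlier in this FAQ. The category labels and function name are our own, not EXPMON's internal identifiers.

```python
# Illustrative sketch: map a submitted filename's extension to the
# file-type group this FAQ lists. Labels here are hypothetical.
import os
from typing import Optional

CATEGORIES = {
    "word": {"doc", "docx", "docm", "dotm", "dotx", "mht", "rtf"},
    "excel": {"xls", "xlsx", "xlsm", "xltm", "xlsb", "xlam"},
    "powerpoint": {"ppt", "pps", "pptx", "ppsx", "pptm", "potx", "ppsm", "ppam"},
    "email": {"msg", "eml"},
    "pdf": {"pdf"},
}

def file_category(filename: str) -> Optional[str]:
    """Return the file-type group for a filename, or None if unsupported."""
    ext = os.path.splitext(filename)[1].lstrip(".").lower()
    for category, extensions in CATEGORIES.items():
        if ext in extensions:
            return category
    return None
```

A wrong or missing extension sends the file down the wrong path (or none at all), which is exactly why submitting the original filename helps.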

The output is a bit more complicated to explain. For the exact technical details of the API input/output, please visit our GitHub page

Basically, the output comes with the detection (4 levels: CLEAN/INFORMATIONAL/SUSPICIOUS/MALICIOUS) and the detection description (if something was detected). In addition to the detection information, there are the objects. Every object contains its hash, its object type, and its "envlogs". Every "envlog" carries the name of an environment, and for every environment you will receive the various logs recorded in that environment; for now, we provide the "FileAccess_Read", "FileAccess_Write", "RegAccess_Read", and "RegAccess_Write" logs.

For example, a single sample may generate 2 objects; each object may be put into 4 environments, and every environment records 4 types of logs. Therefore, for this sample you will receive 2 * 4 * 4 = 32 recorded logs.
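The objects/envlogs nesting can be sketched as below. The field names follow the FAQ's description, but the exact JSON layout is our assumption, not the documented schema (see the GitHub page for that).

```python
# Hypothetical sketch of walking an EXPMON-style result structure.
# The "objects"/"envlogs" layout is assumed from the FAQ's prose.
LOG_TYPES = ["FileAccess_Read", "FileAccess_Write",
             "RegAccess_Read", "RegAccess_Write"]

def count_logs(result: dict) -> int:
    """Count recorded logs: one per (object, environment, log type)."""
    total = 0
    for obj in result.get("objects", []):
        for envlog in obj.get("envlogs", []):
            total += sum(1 for log_type in LOG_TYPES if log_type in envlog)
    return total

# 2 objects x 4 environments x 4 log types = 32 logs, as in the text.
sample_result = {
    "detection": "MALICIOUS",
    "objects": [
        {"hash": "aaa...", "type": "pdf",
         "envlogs": [{t: [] for t in LOG_TYPES} for _ in range(4)]},
        {"hash": "bbb...", "type": "js",
         "envlogs": [{t: [] for t in LOG_TYPES} for _ in range(4)]},
    ],
}
```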

These "environment-binding" sandboxing analysis logs are highly valuable if you want to perform your own analysis; you may even write your own rules based on the logs to detect suspicious behaviors.

I just submitted one sample to you, but why am I seeing more than one object in the output?

That's an advanced feature we developed. We use pre-sandbox static analysis and post-sandbox analysis to obtain as many objects linked from the submitted sample as possible. If we find more, we put them into our system for analysis as well.

For example, if you submit an email file that contains multiple attachments, not only do we test the email file against Outlook, but the attachments will also be extracted, and any of them that match the "legitimate" file types (as defined in our system) will be put into the system for analysis as well.
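The attachment-extraction idea can be illustrated with Python's standard-library email parser for .eml files. EXPMON's own extractor is internal; this is only a conceptual sketch of the same step.

```python
# Conceptual sketch: pull attachments out of an .eml file so each can
# be analyzed as its own object (EXPMON's real extractor is internal).
from email import message_from_bytes
from typing import List, Tuple

def extract_attachments(raw_eml: bytes) -> List[Tuple[str, bytes]]:
    """Return (filename, payload) pairs for each attachment found."""
    msg = message_from_bytes(raw_eml)
    attachments = []
    for part in msg.walk():
        name = part.get_filename()
        if name:  # parts with a filename are treated as attachments
            attachments.append((name, part.get_payload(decode=True)))
    return attachments
```

Each extracted payload would then be routed by its own file type, exactly as if it had been submitted directly.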

Therefore, the final detection result is a combination of the detections for all the "found" objects: if any of the objects is detected as Malicious, the initial sample will be classified as Malicious.
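One natural way to implement such a combination rule is "most severe verdict wins" across the four detection levels. Note the FAQ only states the Malicious case explicitly; extending it to the other levels is our assumption, and the function name is ours.

```python
# Sketch of a "most severe verdict wins" combination rule. The four
# levels come from this FAQ; generalizing beyond the Malicious case
# is our assumption.
SEVERITY = ["CLEAN", "INFORMATIONAL", "SUSPICIOUS", "MALICIOUS"]

def combined_verdict(object_verdicts: list) -> str:
    """Return the most severe detection level among all found objects."""
    return max(object_verdicts, key=SEVERITY.index)
```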

How long does your system take to process one sample?

It depends on many factors - for example, how many objects are "found" from the originally submitted sample. Usually, one object takes at least 30 seconds to finish, so you should not check the result within 30 seconds of submission. Please check our sample code to see how to use our API properly.
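A client should therefore wait before its first status check, then poll at intervals. Below is a hedged sketch of such a loop; EXPMON's actual endpoints and response format are documented on the GitHub page, so `fetch_result` here is just any callable that returns None until the analysis is ready.

```python
# Hypothetical polling helper. `fetch_result` stands in for whatever
# API call retrieves the result; its real name/endpoint is on GitHub.
import time

def poll_result(fetch_result, initial_delay=30.0, interval=10.0,
                timeout=300.0, sleep=time.sleep):
    """Wait at least `initial_delay` seconds (the per-object minimum
    noted above), then poll until a result arrives or `timeout` elapses."""
    sleep(initial_delay)
    waited = initial_delay
    while True:
        result = fetch_result()
        if result is not None:
            return result
        if waited >= timeout:
            raise TimeoutError("analysis did not finish in time")
        sleep(interval)
        waited += interval
```

The injectable `sleep` parameter is only there so the helper can be exercised without real delays; in production the default `time.sleep` applies.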