• Many Forms, Many Options

    There is no universally optimal cybersecurity approach

    Cyber risks come in many different forms. Some risks are of our own making, others come from third parties, and still more come from nature itself. This means that your optimal cybersecurity program will differ from those of other companies. It must be designed to work in tandem with your teams, your processes, your partners, and your customers. This is how we approach every project - the outcome must be effective, mitigating risks while allowing the business to operate well.

    Not all vulnerabilities should be mitigated, not all mitigations involve systems

    It may seem obvious, but the existence of a vulnerability does not mean you must invest effort in mitigating it. You have three reasonable options for handling vulnerabilities: mitigate, transfer, or accept (note that ignoring them is not a valid option). We often focus on mitigations: how to prevent or reduce the impact of a vulnerability. However, in everyday life we also transfer risks (for example, by purchasing insurance) and accept them (for example, by letting a tree grow near the house). We must do the same with cyber risks; otherwise, the cost of security would outpace the business benefits. An effective cybersecurity program balances business risks and rewards, which is only possible when the organization understands each risk's potential cost and weighs it in the context of its operations.


    This is why we focus on identifying and quantifying your unique set of cybersecurity risks as the first step in a project. It positions you to make informed business-driven decisions regarding cyber-related investments.
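    One common way to quantify a risk for the kind of decision described above is annualized loss expectancy (ALE): the expected incidents per year (ARO) multiplied by the cost per incident (SLE). The figures below are invented purely for illustration; a minimal sketch:

```python
# Toy annualized-loss-expectancy comparison.
# ALE = ARO (expected incidents per year) x SLE (cost per incident).
# All dollar figures here are made-up illustrative values.
def ale(aro, sle):
    return aro * sle

risk = ale(aro=0.2, sle=50_000)   # expected loss: $10,000 per year
mitigation_cost = 25_000          # annual cost of the proposed control

# If the control costs more than the expected loss it prevents,
# accepting (or transferring) the risk may be the better business call.
decision = "mitigate" if mitigation_cost < risk else "accept or transfer"
print(risk, decision)  # 10000.0 accept or transfer
```

    The point is not the arithmetic but the discipline: every option (mitigate, transfer, accept) gets compared against a quantified cost rather than a gut feeling.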

  • Better Cyber Tooling Through AI and Semantics

    In our spare time, we like to explore what is new in cybersecurity. Here is one area that we believe holds promise.


    Data related to cybersecurity comes in many different forms. Recognizing aberrant human or system behavior requires intense focus along with constant monitoring and learning. These two facts have led us to spend time researching both Artificial Intelligence (AI, an over-hyped and somewhat meaningless term these days) and Graph Databases (along with Semantics, defining the meaning of data).


    Graph databases are the perfect environment when you have disparate data sources, formats, and schemas. Graphs are not impacted by changing relationships between data elements. As our understanding of data relationships evolves, or a new data source requires altering a multiplicity (e.g., 1-to-1 becomes 1-to-many), a graph database puts a smile on our faces: we add the new connection without having to change any other aspect of the data's structure.
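    The multiplicity change described above can be sketched with a toy triple store. The names (server-01, ownedBy, and so on) are invented for illustration; a real graph database works the same way at a much larger scale:

```python
# Minimal property-graph sketch: edges stored as (subject, predicate,
# object) triples. Identifiers are illustrative, not from a real schema.
edges = set()

def connect(subject, predicate, obj):
    edges.add((subject, predicate, obj))

# Start with what looks like a 1-to-1 relationship...
connect("server-01", "ownedBy", "alice")

# ...later we learn servers can have multiple owners. No schema
# migration, no column changes -- just add another edge.
connect("server-01", "ownedBy", "bob")

owners = sorted(o for (s, p, o) in edges if s == "server-01" and p == "ownedBy")
print(owners)  # ['alice', 'bob']
```

    Contrast this with a relational table holding an owner column: the same change would force a new join table and a data migration.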


    The addition of semantics gives us a powerful ally in the form of logic available within our database. This capability, based largely on the W3C OWL standard, makes connecting data between different systems a no-risk scenario. There is no heavyweight refactoring to do. We can connect the concepts using OWL, and if we decide we have it wrong, we just remove the connection. No harm, no foul, and no time wasted creating ETL scripts and loading data into rigid structures.
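    The connect-then-retract workflow can be illustrated with an OWL-style equivalence assertion. This is a plain-Python stand-in for a real triple store, and the prefixed names (hr:Employee, crm:Staffer) are hypothetical vocabularies from two different systems:

```python
# Two systems name the same concept differently. Linking them is a
# single owl:equivalentClass assertion -- no ETL, no refactoring.
# (Toy triple store; identifiers are illustrative only.)
triples = {
    ("hr:Employee", "rdf:type", "owl:Class"),
    ("crm:Staffer", "rdf:type", "owl:Class"),
}

link = ("hr:Employee", "owl:equivalentClass", "crm:Staffer")
triples.add(link)

def equivalents(cls):
    """Classes asserted equivalent to cls, in either direction."""
    return ({o for (s, p, o) in triples if p == "owl:equivalentClass" and s == cls}
            | {s for (s, p, o) in triples if p == "owl:equivalentClass" and o == cls})

linked = equivalents("hr:Employee")    # the mapping is in effect
triples.discard(link)                  # decided it was wrong? retract it
unlinked = equivalents("hr:Employee")  # nothing else changed
print(linked, unlinked)
```

    Retracting the single triple undoes the mapping completely; the underlying data in both systems is untouched.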


    Combining these features with a machine learning environment (a common AI approach) holds promise for moving AI beyond its somewhat limited "learning" abilities to a platform that would be able to recognize and classify novel situations, a capability which would be incredibly useful in the field of cybersecurity. This area of research is interesting and is one that we are actively pursuing.

  • Honeypots

    A honeypot is a server running specialized software applications known as sensors. Each sensor is designed to identify specific types of access attempts to the server. The server has no business-related software installed. There are no links to the server from other systems and no users would have any reason to connect to the server. Therefore, any attempt to access the honeypot server is regarded as an attack, since the access attempt cannot have a legitimate purpose.

    The honeypot server is placed on a network (inside or outside a corporate firewall) and the various sensors report connection attempts when they occur. Typically, a central server is used to collect and aggregate the information from the various honeypot servers and sensors.
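    The essence of a sensor, stripped of everything a production tool like MHN adds, is just a listener with no real service behind it: any connection is recorded as a suspected attack. The sketch below binds an OS-chosen local port for demonstration, whereas a real sensor would sit on a well-known port (such as 22 or 445) to look like a genuine service:

```python
import socket
import threading

# Minimal honeypot-sensor sketch: a TCP listener with no service behind
# it. Nothing legitimate should ever connect, so every connection is
# logged as a suspected attack. Port 0 lets the OS pick a free port for
# this self-contained demo.
events = []

def run_sensor(server, count=1):
    for _ in range(count):
        conn, addr = server.accept()
        # Record who knocked; a real sensor would forward this to the
        # central aggregation server.
        events.append({"peer": addr[0], "port": server.getsockname()[1]})
        conn.close()  # no banner, no service -- just record and drop

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]

t = threading.Thread(target=run_sensor, args=(server,), daemon=True)
t.start()

# Simulate an attacker probing the port.
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
t.join(timeout=5)
server.close()

print(events)  # one event recording the connection attempt
```

    Real sensors go further, emulating protocol banners and capturing payloads, but the detection principle is exactly this: unsolicited contact is the signal.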

    The purpose of running honeypots is to collect information related to how attackers attempt to break into servers. Used on an internal network, a honeypot can help to identify infected computer systems or rogue workers.

    If you are interested, there are a variety of open source honeypot projects. One good place to get started is the Modern Honey Network (MHN) (https://threatstream.github.io/mhn/). This open source platform greatly simplifies the process of setting up and operating a honeypot. We have used honeypots managed with the MHN software for research and customer projects.

    Additionally, we have made our research-based data available. All of the data collected was obtained using the servers and sensor configurations supplied by the MHN. The data is available for download at these links:
    Honeypot data as ZIP file
    Honeypot data as tar.gz file
    When you uncompress the archive you will find a readme.txt file with basic information, a license.txt file with the Affero GPL license text, and a honeypot.data.readme.txt file with in-depth documentation on importing and interpreting the data.