One of the most difficult challenges in security is detecting threats that come from within; most tools in place are designed to protect an infrastructure from outside dangers rather than to monitor what is going on inside the firewall. Insider threats fall into three broad groups. The first comprises attacks carried out by employees with malicious intent, who might, for example, look to establish a new source of income by selling valuable data to a competitor. The second group, termed 'compromised insiders', consists of people whose devices have unwittingly become infected with malware. The third group, the 'accidental insiders', are staff who inadvertently release sensitive data to a third party via email, or perhaps leave an unsecured laptop in the back of a taxi. Sharelock makes use of the rapid advances being made in Artificial Intelligence and Machine Learning to help security teams and business managers overcome this challenge.

With the self-audit feature, end users, business partners and customers can review their own risk-scored behavior for self-analysis. Through convenient, regular reporting they can view their devices, access, activity, and risk-scored anomalies, enabling a collaborative partnership with the organization to protect its assets. Because these users have rich context not available within a SOC, they can quickly confirm or dismiss anomalous behavior and flag false positives, improving the accuracy of the machine learning models.
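Sharelock's self-audit interfaces are not publicly documented, but the idea of surfacing a user's devices and risk-scored anomalies for their own review can be sketched in a few lines. The `Anomaly` type, the `self_audit_report` function, and the 0.7 threshold below are all illustrative assumptions, not the product's actual API.

```python
from dataclasses import dataclass

@dataclass
class Anomaly:
    description: str
    risk_score: float  # hypothetical scale: 0.0 (benign) to 1.0 (high risk)

def self_audit_report(user: str, devices: list[str],
                      anomalies: list[Anomaly],
                      threshold: float = 0.7) -> str:
    """Build a plain-text self-audit summary a user could review.

    Only anomalies at or above the (assumed) risk threshold are shown,
    mirroring the idea that users see their own risk-scored behavior.
    """
    flagged = [a for a in anomalies if a.risk_score >= threshold]
    lines = [
        f"Self-audit for {user}",
        f"Devices: {', '.join(devices)}",
        f"Anomalies above threshold ({threshold}): {len(flagged)}",
    ]
    lines += [f"  - {a.description} (risk {a.risk_score:.2f})" for a in flagged]
    return "\n".join(lines)
```

A user reviewing such a report could then confirm a flagged item as expected behavior, which is the false-positive feedback loop the text describes.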



By taking a risk-based approach to identity access management and its processes, department leaders gain assurance that the sensitive intellectual property their teams work with is protected from excess access and access outliers. This heightens departmental productivity, allowing the team to focus more effectively on organizational goals and objectives, and eliminates rubber-stamping of certifications and access cloning. In addition, the self-audit fosters an engaged understanding of security requirements and best practices, and a sense of collective involvement among group members in helping to protect the organization.
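One common way to detect the "access outliers" mentioned above is peer-group comparison: entitlements held by few members of a department are rare, and a user whose access is mostly rare stands out. The following is a minimal sketch of that general technique, not Sharelock's actual algorithm; the cutoff values are arbitrary assumptions.

```python
from collections import Counter

def access_outliers(dept_access: dict[str, set[str]],
                    rarity_cutoff: float = 0.25,
                    outlier_share: float = 0.5) -> list[str]:
    """Flag users whose entitlements are mostly rare within their department.

    An entitlement is 'rare' if held by at most rarity_cutoff of the group;
    a user is an outlier if more than outlier_share of their entitlements
    are rare. Both thresholds are illustrative, analyst-tunable values.
    """
    n = len(dept_access)
    counts = Counter(e for ents in dept_access.values() for e in ents)
    rare = {e for e, c in counts.items() if c / n <= rarity_cutoff}
    return [user for user, ents in dept_access.items()
            if ents and len(ents & rare) / len(ents) > outlier_share]
```

A reviewer would then certify or revoke the flagged access rather than rubber-stamping an entire department's entitlements.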

Through UEBA risk scoring, analysts detect malicious insiders, account compromise, data exfiltration and external intruders. Context-aware analysis supports multi-level analyst reviews, with data masking enforced through workflow based on role-based access controls. Processes are empowered through pre-built Machine Learning models, with case and ticket management handled internally or through solution applications such as Remedy, ServiceNow, and SFDC, plus email or SMS alerts.
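At its simplest, UEBA risk scoring compares current behavior against a user's own baseline, and role-based masking hides sensitive fields from junior reviewers. The sketch below illustrates both ideas under stated assumptions: a z-score capped at 4 mapped to a 0..1 risk, and a hypothetical `senior-analyst` role that sees unmasked data. Neither detail is taken from the product.

```python
import statistics

def risk_score(history: list[float], current: float) -> float:
    """Score deviation of a current activity volume from the user's
    baseline as a bounded 0..1 risk (absolute z-score, capped at 4)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    z = abs(current - mean) / stdev
    return min(z / 4.0, 1.0)

def mask(value: str, role: str) -> str:
    """Role-based data masking: only the (hypothetical) senior-analyst
    role sees the full value; others see a redacted form."""
    return value if role == "senior-analyst" else value[:2] + "***"
```

In a multi-level review, a first-tier analyst might triage on the risk score alone, with identifying fields masked, while a senior analyst sees the full context.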



These capabilities are enabled by advanced security analytics solutions offering vendor-agnostic tooling to create custom Machine Learning models without coding, in a step-by-step process requiring only minimal knowledge of data science. As an open solution, it lets advanced security analysts and data scientists adjust risk weightings within existing Machine Learning models, add or update attributes in the metadata model, and ingest desired attributes from any data source with a flex data connector. Data science teams and architects can use their data lake of choice to extend Machine Learning model use cases.
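Adjusting risk weightings within existing models often amounts to blending per-model risk signals with analyst-tunable weights. The function below is a generic weighted-average sketch of that idea; the signal names and weight values are invented for illustration and do not reflect Sharelock's model internals.

```python
def combined_risk(signals: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Blend per-model risk signals (each assumed in 0..1) into one score
    using analyst-tunable weights; unweighted signals contribute nothing."""
    total = sum(weights.get(name, 0.0) for name in signals)
    if total == 0:
        return 0.0
    return sum(score * weights.get(name, 0.0)
               for name, score in signals.items()) / total
```

Raising the weight of, say, an exfiltration signal relative to a logon-anomaly signal shifts the combined score without retraining the underlying models, which is the kind of tuning the text attributes to analysts.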