The Open Cybersecurity Schema Framework (OCSF) keeps moving forward with the 1.2 release, which makes some important advances based on inputs from its ever-growing community. I will briefly summarize what’s new later in this blog, but as always, I encourage interested parties to check out the schema at schema.ocsf.io and github.com/ocsf. A full list of the additions and enhancements can be found in the CHANGELOG.md file.
One of the more interesting features of the framework is how profiles can augment the natural structure of event classes and categories. I’m going to describe them in some detail here.
I’ve mentioned OCSF Profiles in previous blogs, but I want to go into more detail here, as they are becoming more important and are sometimes misunderstood in terms of how they can be constructed. There are four ways of modeling with profiles, which I’ll walk through below.
An OCSF Profile is a framework construct that cuts across categories and classes to augment classes and objects with focused ‘mix-in’ attributes that better describe aspects of activities and findings in certain situations. Rather than having an explosion of classes that combine attributes for these situations, profiles are an elegant way of reusing the semantics of fundamental classes without extending them with new classes. To a Java or C++ developer, profiles will resemble additional interfaces implemented on top of a class; similarly, in OCSF, a profile acts as an event type that cuts across event classes.
Hence a profile is two things: a mix-in attribute set, and an alternate typing of the event class or object where it is registered. This is accomplished via a “profiles” array at the head of the class or object. The OCSF schema server takes care of filtering or augmenting classes and objects appropriately. In this way, a related set of attributes can be added selectively, independent of class or category, when its type cuts across the structural taxonomy. For example, the Host profile can be applied to the Network Activity category classes for host-based network activity coming from an EDR security agent. Querying events WHERE “Host” IN metadata.profiles[] then retrieves all events from the System Activity category and the Network Activity classes.
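For a consumer holding OCSF events as parsed JSON, that profile query reduces to a membership test on metadata.profiles. Here is a minimal sketch, assuming events are plain Python dicts; the helper name and sample events are illustrative, not part of OCSF:

```python
def events_with_profile(events, profile):
    """Return all OCSF events tagged with the given profile,
    regardless of their class or category."""
    return [e for e in events
            if profile in e.get("metadata", {}).get("profiles", [])]

# Hypothetical events from different classes, some tagged with "host".
events = [
    {"class_name": "Network Activity",
     "metadata": {"profiles": ["host"]}},
    {"class_name": "Process Activity",
     "metadata": {"profiles": ["host", "security_control"]}},
    {"class_name": "Authentication",
     "metadata": {"profiles": []}},
]

hits = events_with_profile(events, "host")
# Matches the Network Activity and Process Activity events only.
```

The query cuts across the class taxonomy: neither the class_uid nor the category matters, only the profile tag in the metadata.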
{
  "description": "The attributes that identify host/device attributes.",
  "meta": "profile",
  "caption": "Host",
  "name": "host",
  "annotations": {
    "group": "primary"
  },
  "attributes": {
    "device": {
      "requirement": "recommended"
    },
    "actor": {
      "requirement": "optional"
    }
  }
}
The most common way of designing and using a profile is to define it in the metaschema profiles folder via a profile name and the profile attributes, as above; then declare the profile in the class or object; and finally $include the profile to bring in its attributes, as below. The attributes are added when the profile is applied to an event class or object.
{
  "caption": "Network",
  "category": "network",
  "description": "Network event is a generic event that defines a set of attributes available in the Network category.",
  "extends": "base_event",
  "name": "network",
  "profiles": [
    "host",
    "network_proxy",
    "security_control",
    "load_balancer"
  ],
  "attributes": {
    "$include": [
      "profiles/host.json",
      "profiles/network_proxy.json",
      "profiles/security_control.json",
      "profiles/load_balancer.json"
    ],
    ...
This is the augmentation profile approach. When the profile is enabled in the schema browser, the respective classes and objects are augmented with the profile attributes, and schema samples will include the profile name in the metadata.profiles[] array, effectively typing the event or object as a kind of that profile.
{
  "type_name": "Network Activity: Open",
  "activity_id": 4,
  "type_uid": 400104,
  "class_uid": 4001,
  "category_uid": 4,
  "class_name": "Network Activity",
  "metadata": {
    "version": "1.1.0",
    "profiles": ["host"]
  },
  "category_name": "Network Activity",
  ...
If events are queried by a profile name, all events matching that profile will be returned, irrespective of class or category. However, there are three other ways to use profiles in the schema.
The second approach applies where the attributes of a profile definition are already natively defined within the event class or object. Think of this as the built-in or native profile approach. For the profiles system and typing to be consistent, those classes and objects must declare the profile within the class, as with the augmentation approach. However, there is no need to $include the profile in the attributes section, since those attributes (in the case of the Host profile, actor and device) are already defined there.
{
  "caption": "System Activity",
  "category": "system",
  "extends": "base_event",
  "name": "system",
  "profiles": [
    "host",
    "security_control"
  ],
  "attributes": {
    "$include": [
      "profiles/security_control.json"
    ],
    "actor": {
      "group": "primary",
      "requirement": "required"
    },
    "device": {
      "group": "primary",
      "requirement": "required"
    }
  }
}
...
What happens when only some of the attributes of the profiles are native to an event class or object? This is the partially native profile approach. Using the augmentation profile approach, where the profile is $included into the class or object, the schema server will remove the native attributes when the profile is not applied, which isn’t what you would want. For these cases, a “profile”: null statement should be added to the potentially affected native attribute, which tells the server to leave it alone regardless of the profile application. In the example below, actor is native to the Authentication class, but device is not. When the profile is applied, only device will be added, and when not applied, actor will stay put.
{
  "caption": "Authentication",
  "extends": "iam",
  "name": "authentication",
  "uid": 2,
  "profiles": [
    "host"
  ],
  "attributes": {
    "$include": [
      "profiles/host.json"
    ],
    "actor": {
      "description": "The actor that requested the authentication.",
      "group": "context",
      "profile": null
    },
    ...
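The resolution rule can be sketched in a few lines of Python: attributes owned by a profile are visible only when that profile is applied, while native attributes pinned with “profile”: null are always kept. This is an illustrative re-implementation, not the actual OCSF schema server code, and the attribute entries (including the user attribute) are simplified for the example:

```python
# Sentinel to distinguish a missing "profile" key from an explicit null.
_MISSING = object()

def apply_profiles(attributes, applied):
    """Resolve a class's merged attribute table for a set of applied
    profiles. An attribute is kept if it has no 'profile' key (ordinary
    native attribute), if the key is None (the '"profile": null' case,
    pinned native), or if its owning profile is currently applied."""
    resolved = {}
    for name, attr in attributes.items():
        owner = attr.get("profile", _MISSING)
        if owner is _MISSING or owner is None or owner in applied:
            resolved[name] = attr
    return resolved

# Authentication after $include of the Host profile: 'device' came in
# from the profile, while 'actor' is native and pinned with null.
auth_attrs = {
    "device": {"requirement": "recommended", "profile": "host"},
    "actor": {"group": "context", "profile": None},
    "user": {"requirement": "required"},  # hypothetical native attribute
}

with_host = apply_profiles(auth_attrs, {"host"})   # actor, device, user
without_host = apply_profiles(auth_attrs, set())   # actor, user
```

With the Host profile applied, device is added; without it, device is filtered out while the pinned actor attribute stays put, exactly as described above.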
Finally, what if a class or object should be considered part of the profile family, but needs to add new attributes that are relevant only to that particular class or object? This may sound a bit esoteric, but it has already been used in the resource_details object for the Cloud profile. When the Cloud profile is applied to classes with attributes of the resource_details object type (for example, API Activity), the cloud_partition and region attributes defined within the object are added, but only when the Cloud profile is applied to the class. The event now includes the api and cloud attributes, while the resource_details object of the class adds the other two attributes, effectively creating a custom hybrid profile.
If you $included the profile attributes, as with the augmented profile, you would also get the Cloud profile’s attributes in the object as well as the class. You don’t want to duplicate those attributes applied by the profile to the class into the objects too. To make the object’s native attributes aware of the profile (such that the server switches them on, and the event validator won’t complain), you add “profile”: <profile name> within your object’s attribute clause, as well as the usual declaration within the profiles array at the head of the class or object.
The example below assigns the Cloud profile to the specific native attributes cloud_partition and region of the Resource Details object. These attributes are not part of the Cloud profile definition, so only this specific object will include them when the Cloud profile is applied to its enclosing class. In this way, applying a profile can add its attributes to a class, and different attributes can be added to an object within that class.
{
  "caption": "Resource Details",
  "extends": "_resource",
  "name": "resource_details",
  "profiles": ["cloud"],
  "attributes": {
    "agent_list": {
      "requirement": "optional"
    },
    "cloud_partition": {
      "profile": "cloud",
      "requirement": "optional"
    },
    "owner": {
      "description": "The service or user account that owns the resource.",
      "requirement": "recommended"
    },
    "region": {
      "description": "The cloud region of the resource.",
      "profile": "cloud",
      "requirement": "optional"
    },
    ...
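Conceptually, resolving the Resource Details object then reduces to filtering its attribute table against the set of applied profiles. The sketch below is illustrative only (not the schema server’s actual code), with the attribute entries abbreviated:

```python
def visible_attributes(obj_attrs, applied_profiles):
    """Keep an attribute if it carries no profile tag, or if its tag
    names a profile that is currently applied to the enclosing class."""
    return {name: attr for name, attr in obj_attrs.items()
            if attr.get("profile") is None
            or attr["profile"] in applied_profiles}

# Abbreviated Resource Details attribute table from the example above.
resource_details = {
    "agent_list": {"requirement": "optional"},
    "cloud_partition": {"profile": "cloud", "requirement": "optional"},
    "owner": {"requirement": "recommended"},
    "region": {"profile": "cloud", "requirement": "optional"},
}

with_cloud = visible_attributes(resource_details, {"cloud"})
without_cloud = visible_attributes(resource_details, set())
# cloud_partition and region appear only when the Cloud profile
# is applied to the enclosing class.
```

This shows the hybrid behavior: the object’s own cloud-tagged attributes switch on and off with the profile, independently of the attributes the profile adds to the class itself.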
Perhaps the most important schema change since the 1.0 release is the work contributors have done on the Findings category, beginning with OCSF 1.1. You may remember that the original Findings class was Security Finding, which covered a lot of use cases but without much specialization. This class is still supported but has been deprecated in favor of four new classes: Detection Finding, Vulnerability Finding, Compliance Finding, and Incident Finding. These new classes were developed based on vendors’ experience with the schema, as well as the need to cover additional use cases.
All the classes share the finding_info attribute and extend an internal finding class to maintain consistency, which is one of the goals of OCSF schema development. The names Detection Finding, Vulnerability Finding and Compliance Finding should give you a hint as to how they are intended to be used. Detection Finding is the closest to the original Security Finding: it is the result of an analysis, described by the Analytic object, of one or more events and alerts, indicating a potential threat. MITRE ATT&CK™ Tactics, Techniques and Sub-techniques support has been improved as well.
Vulnerability Finding is somewhat different in that it represents the result of a scan (but not the scan process) that found known vulnerabilities or exposures. Compliance Finding is based on a violation of a policy. Any of these findings may be indicative of an incident in the environment, and that is where Incident Finding comes in.
Incident Finding aggregates any of the other findings, along with their contributing events, and adds attributes for the verdict of the findings and incidents. In OCSF 1.2 the Data Security Finding class was added, handling a slew of data security products such as Data Loss Prevention, Data Classification, Data Security Posture Management and more.
The Security Control profile has been updated to better represent more control enforcement events, particularly access control decisions for network and datastore activities. The combination of Detection Findings and the Security Control profile models typical alerts coming from security detection and enforcement points, such as endpoint security or layer 7 firewalls with UTM functionality.
The Network Proxy profile is another example of augmenting common network activities in the presence of proxy servers or NAT addressing. It adds several proxy-oriented attributes to any of the classes in the Network Activity category. The Load Balancer profile adds attributes that model the distribution of traffic across a network of server endpoints.
I’m excited about the members’ interest in going beyond the schema aspects of OCSF and delving deeper into the framework. To that end, we have created three new workstreams: Encodings, Mappings, and Metaschema tools.
The Encodings workstream is focused on serializing OCSF events and objects in wire protocols and for data lake storage. The normative form of the OCSF metaschema, schema, and resulting events is JSON; however, in practice, efficient wire protocols such as protobuf and columnar storage formats such as Parquet are prevalent in the security community. This workstream aims to formalize recommendations on how to encode OCSF events.
The Mapping workstream is focused on standardizing how popular event sources can conform to OCSF. Even with a standardized schema that attempts to reduce ambiguity, there is enough room for variation in how an event producer or event mapper populates the schema. In addition, members find themselves redundantly solving the same problem. So the Mapping workstream and the OCSF Examples repository aim to align at a layer above the schema. And since most vendors and event consumers still need to map events during a streaming process, Splunk donated its own Java parser and translator rules engine to the community in the ocsf-tools repository. There are other options as well, but being able to share mappings using a standard approach is what we are ultimately striving for.
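To give a flavor of what such a mapping does, here is a hedged sketch that maps a hypothetical raw firewall record into a partial OCSF Network Activity event. The raw field names are invented for illustration, and only a handful of OCSF attributes are populated; a real mapping would follow the conventions in the OCSF Examples repository:

```python
def map_firewall_event(raw):
    """Map a hypothetical raw firewall record to a partial OCSF
    Network Activity (class_uid 4001) event with the Host profile."""
    return {
        "class_uid": 4001,
        "category_uid": 4,
        "class_name": "Network Activity",
        "metadata": {"version": "1.2.0", "profiles": ["host"]},
        "src_endpoint": {"ip": raw["src_ip"], "port": raw["src_port"]},
        "dst_endpoint": {"ip": raw["dst_ip"], "port": raw["dst_port"]},
        # 'device' comes from the Host profile applied to the class.
        "device": {"hostname": raw["reporting_host"]},
    }

# Invented raw record, standing in for whatever the source emits.
event = map_firewall_event({
    "src_ip": "10.0.0.5", "src_port": 51544,
    "dst_ip": "93.184.216.34", "dst_port": 443,
    "reporting_host": "fw-edge-01",
})
```

Sharing this translation as declarative rules, rather than as one-off code like the above, is exactly the alignment the workstream is after.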
The Metaschema workstream is focused on the framework’s correctness and on tools that help schema developers validate their work, including their extensions. The OCSF Schema Server has a set of APIs to this end, but newer projects such as the ocsf-validator, which began with a donation from one of our member companies, provide more granular checks that can be used in CI/CD pipelines or other applications.
If you are interested in what’s on tap for OCSF 1.3, you can join the Slack workgroup by sending an email request to info@ocsf.io and check out the PRs in the GitHub ocsf-schema repository. We are working on STIX integration, MITRE D3FEND™ integration for remediation, and continued additions and improvements to the core schema.