Reference

Glossary

Definitions of the terms Focus FS uses to describe operational intelligence, embedded AI, and industrial software. Each entry is a single source of truth for the term across our platform, documentation, and conversations with customers.

Operational Intelligence
A category of software that connects field workflows, asset data, and decision-making into a single live system. Distinct from generic ERP or business intelligence in its focus on real-time operational decisions in high-consequence environments.
Applied AI Guardrails
Focus FS's framework of security and governance controls built into AI systems from the start, not added as a compliance afterthought. Includes data isolation, role-based access, project isolation, source attribution, confidence indicators, and full audit trails.
Embedded AI
AI deployed directly inside operational workflows (incident reports, inspections, risk assessments) rather than as a separate parallel system. Designed to augment human expertise, not replace judgment.
Configurable Workflows
Operational processes that can be adapted without custom development. Standardized and automated for accuracy and consistency in data, actions, and reporting.
Project Isolation
A security control ensuring that even when an AI system can answer a question, it first verifies that the user is authorized to see the answer. Permissions extend into AI responses, not just feature access.
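A minimal sketch of the idea, assuming a simple project-membership map; the names and data are illustrative, not Focus FS APIs:

```python
# Which users may see each project's data (illustrative data only).
PROJECT_MEMBERS = {
    "project-a": {"alice", "bob"},
    "project-b": {"carol"},
}

def answer_for_user(user: str, project: str, draft_answer: str) -> str:
    """Release the AI's draft answer only if the user is authorized
    for the project the answer was derived from."""
    if user in PROJECT_MEMBERS.get(project, set()):
        return draft_answer
    return "You are not authorized to view information from this project."
```

The key point is that the check runs after retrieval: the system may be able to compose an answer, but the answer is withheld unless the requester's project permissions allow it.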
Role-Based Access Control (RBAC)
An access control model where permissions are assigned to roles rather than individual users. Users inherit permissions from their assigned roles, supporting customizable permission levels from company and project down to modules and reports.
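The inheritance model can be sketched in a few lines; role and permission names here are assumptions for illustration:

```python
# Permissions attach to roles, never directly to users.
ROLE_PERMISSIONS = {
    "inspector": {"inspections:read", "inspections:write"},
    "viewer": {"inspections:read", "reports:read"},
}

# Users inherit permissions from their assigned roles.
USER_ROLES = {
    "alice": {"inspector"},
    "bob": {"viewer"},
}

def user_permissions(user: str) -> set:
    """Union of permissions inherited from all of the user's roles."""
    perms = set()
    for role in USER_ROLES.get(user, set()):
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

def can(user: str, permission: str) -> bool:
    return permission in user_permissions(user)
```

Because permissions live on roles, granting a new capability to every inspector is one change to the role, not one change per user.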
Data Isolation
Strict separation of organizational data between tenants in a multi-customer cloud environment. Customer operational data is never used to train another customer's model.
Audit Trails
Detailed, immutable logs of which users created or updated records in the system. Critical for compliance, regulatory review, and incident investigations.
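One common way to make a log tamper-evident is to hash-chain its entries, so that editing any past record invalidates everything after it. This is a sketch of that technique under assumed field names, not the Focus FS schema:

```python
import hashlib
import json

def append_entry(trail: list, user: str, action: str, record_id: str) -> None:
    """Append an audit entry whose hash covers the previous entry's hash."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {"user": user, "action": action, "record": record_id, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)

def verify(trail: list) -> bool:
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev = "0" * 64
    for e in trail:
        body = {k: e[k] for k in ("user", "action", "record", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```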
Source Attribution
A traceability feature that links AI-generated insights back to the specific operational records, inspections, or incidents the conclusions were derived from. Essential for trust in high-consequence environments.
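In data-model terms, attribution just means an insight never travels without the record IDs behind it. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass, field

@dataclass
class AttributedInsight:
    """An AI-generated conclusion bundled with the records it came from."""
    text: str
    sources: list = field(default_factory=list)  # e.g. inspection/incident IDs

    def citation_line(self) -> str:
        """Render the insight with its source records for review."""
        return f"{self.text} [sources: {', '.join(self.sources)}]"
```

A reviewer who doubts the conclusion can open the cited inspections or incidents directly rather than trusting the model's summary.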
Confidence Indicators
Numerical or visual signals attached to AI responses that communicate how reliable the system considers a given output. Helps operators distinguish between high-certainty and exploratory answers.
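A sketch of how a raw score might be banded for display; the thresholds and labels are assumptions, not Focus FS values:

```python
def confidence_label(score: float) -> str:
    """Map a 0-1 model confidence score to an operator-facing band."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    if score >= 0.85:
        return "high"         # safe to act on, with normal review
    if score >= 0.5:
        return "medium"       # verify against source records first
    return "exploratory"      # treat as a lead, not an answer
```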
Single Sign-On (SSO)
An enterprise authentication mechanism allowing users to access multiple applications with one set of credentials, integrated with the organization's identity management system.
Asset-Intensive Operations
Industrial operations characterized by significant capital equipment — drilling rigs, mining fleets, process plants, offshore platforms — where uptime and reliability of physical assets directly determine business outcomes.
Safety-Critical Operations
Operations where failure of equipment, processes, or decision-making can result in injury, environmental damage, or loss of life. Requires rigorous workflow standardization, audit trails, and regulatory compliance.
Operational Knowledge
Institutional information generated across reports, inspections, incident records, and operational notes. Often dispersed across systems and difficult to surface when decisions need to be made.
Data Drift
Gradual change in the underlying data distribution that an AI model was trained on — caused by evolving processes, new equipment, or shifting terminology. Mitigated through scheduled model revalidation and clear data ownership.
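A simple way to flag drift on a numeric feature is to compare a recent window against the training baseline. Production systems typically use formal tests (PSI, Kolmogorov-Smirnov); this standardized mean-shift check and its threshold are illustrative assumptions:

```python
import statistics

def drift_score(baseline: list, recent: list) -> float:
    """Shift of the recent mean, measured in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma if sigma else float("inf")

def needs_revalidation(baseline: list, recent: list, threshold: float = 2.0) -> bool:
    """True when the feature has shifted enough to warrant model revalidation."""
    return drift_score(baseline, recent) >= threshold
```

Scheduled revalidation would run checks like this per feature and escalate to the data owner when a threshold is crossed.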