Frenos: How Local AI Models Are Powering the Future of Cybersecurity
In the world of cybersecurity, speed and precision are everything. A single vulnerability can disrupt an entire city’s power grid or halt production in a refinery. Yet, traditional security models have struggled to keep up with the growing complexity of modern systems.
On the Tesoro AI Podcast, host Darius Gant spoke with Harry Thomas, founder of Frenos, an AI-native operational technology (OT) security platform. What followed was a deep look into how artificial intelligence is redefining the way critical infrastructure defends itself against attacks.
From Nursing to Hacking: The Unlikely Journey of a Founder
Before launching Frenos, Harry Thomas’s career took an unconventional path. He began as a nurse specializing in Alzheimer’s care before transitioning into cybersecurity. Burned out by long hours in healthcare, he returned to school to pursue his passion for computers.
“I started as a penetration tester for financial institutions in Boston,” Thomas explained. “That evolved into working for companies like Security Matters and later Dragos, where we used machine learning for anomaly detection.”
His time at AWS Security proved pivotal. There, he led research on user behavior analytics, using AI models to detect when hackers impersonated employees. The experience helped him see AI’s potential to shift cybersecurity from reactive defense to predictive strategy.
The Birth of Frenos: AI as the Ultimate Security Analyst
The idea behind Frenos emerged from a recurring problem in operational technology: the overwhelming flood of security alerts. “In OT, there’s too much noise. Teams don’t know what to prioritize,” Thomas said.
Frenos uses machine learning and large language models (LLMs) to transform this chaos into clarity. The system builds a digital twin of a company’s network and unleashes a reasoning agent—an AI that acts like a hacker—inside the simulation. The agent identifies weaknesses, chains vulnerabilities together, and predicts how an adversary might exploit them.
“We can simulate how a hacker could move through a network without ever touching the live environment,” Thomas explained. “It’s completely safe, but incredibly realistic.”
This approach allows Frenos to provide proactive security, showing organizations where attacks could happen before they do. Instead of relying on humans to sift through thousands of alerts, Frenos uses AI to surface the few that matter most.
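The podcast doesn’t reveal Frenos’s internals, but the core idea, chaining vulnerabilities across a modeled network, can be sketched as a graph search over a digital twin. In this illustrative sketch (all asset names and vulnerabilities are made up), assets are nodes, edges mean “an attacker on A can pivot to B via this weakness,” and a depth-first search enumerates complete exploit chains without touching any live system:

```python
def attack_paths(graph, entry, target, path=None):
    """Depth-first enumeration of exploit chains from entry to target.
    Each graph edge carries the vulnerability that makes the hop possible."""
    path = path or [(entry, None)]
    node = path[-1][0]
    if node == target:
        yield path
        return
    for nxt, vuln in graph.get(node, []):
        if all(nxt != seen for seen, _ in path):  # avoid revisiting assets
            yield from attack_paths(graph, entry, target, path + [(nxt, vuln)])

# Illustrative digital twin: every name and CVE here is hypothetical.
twin = {
    "internet": [("vpn-gateway", "auth bypass")],
    "vpn-gateway": [("historian", "weak credentials")],
    "historian": [("engineering-ws", "SMB relay")],
    "engineering-ws": [("plc-01", "unauthenticated Modbus write")],
}

chains = list(attack_paths(twin, "internet", "plc-01"))
for chain in chains:
    print(" -> ".join(n + (f" [{v}]" if v else "") for n, v in chain))
```

A real platform would weight each hop by exploitability and rank the resulting chains, which is one way the “few alerts that matter most” could be surfaced.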
Why Local AI Models Matter
While most AI startups rely on cloud-based LLMs, Frenos took a different route. The platform is fully on-premises, meaning all processing happens locally on customer hardware. This design ensures that no sensitive data ever leaves the organization.
“The Frenos platform isn’t a wrapper around OpenAI or Anthropic,” Thomas clarified. “We run local models with about eight billion parameters. They’re smaller, but faster and safer.”
Local models also address growing concerns about data privacy, compliance, and government access to cloud data. For industries that manage critical infrastructure or personal information—like utilities, healthcare, and public agencies—this local-first approach is becoming essential.
Running models locally comes with challenges, especially in cost and optimization. Frenos fine-tunes each model for specific cybersecurity tasks, ensuring high accuracy despite their smaller size. “If you put our local model up against a larger general-purpose model for our specific use case, ours will win every time,” Thomas said.
The Economics of AI Infrastructure
Building Frenos wasn’t cheap. Training its models cost hundreds of thousands of dollars in experimentation and compute resources. “Training an AI model is not for the faint of heart,” Thomas admitted. “It’s like watching paint dry while burning money.”
He emphasized that startups entering AI need to understand the financial implications. Unlike traditional software, AI systems incur ongoing expenses for API calls, GPU usage, and model retraining. “Every API call adds up fast,” he said. “We chose to build locally so that we could scale more cost-effectively.”
This approach helps clients avoid the sky-high costs of cloud inference, where a single API-heavy workflow can reach tens of thousands of dollars a month.
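The economics Thomas describes are easy to see in a back-of-envelope comparison. Every figure below is an assumption for illustration, not a number quoted in the interview: per-token cloud pricing, call volume, GPU cost, and power are all placeholders.

```python
# Cloud inference: a hosted LLM billed per million tokens (rates assumed).
price_per_m_input = 3.00     # USD per 1M input tokens (assumed)
price_per_m_output = 15.00   # USD per 1M output tokens (assumed)
calls_per_day = 50_000       # an API-heavy automated workflow (assumed)
tokens_in, tokens_out = 2_000, 500  # tokens per call (assumed)

cloud_monthly = calls_per_day * 30 * (
    tokens_in / 1e6 * price_per_m_input
    + tokens_out / 1e6 * price_per_m_output
)

# Local inference: one GPU amortized over three years, plus power/cooling.
gpu_cost, lifetime_months = 30_000, 36  # USD, months (assumed)
power_monthly = 250                     # USD per month (assumed)
local_monthly = gpu_cost / lifetime_months + power_monthly

print(f"cloud: ${cloud_monthly:,.0f}/month")
print(f"local: ${local_monthly:,.0f}/month")
```

Under these assumptions the cloud workflow lands in the tens of thousands of dollars per month while the amortized local box stays near a thousand; the crossover point obviously moves with call volume, which is why the calculation is worth redoing for each workload.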
The Build-or-Buy Dilemma
Many companies approach Frenos thinking they can build a similar platform internally. According to Thomas, that’s easier said than done.
“If you go down the local model route, you’ll need developers who understand model context protocols, retrieval systems, and tool integration,” he explained. “Speed becomes the biggest issue. We can deliver results in under an hour, while an internal system might take days.”
Frenos achieves this speed by combining multiple neural networks—machine learning for rapid computation and LLMs for contextual reasoning. The result is a hybrid approach that balances accuracy and efficiency.
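The hybrid split described above, fast ML for bulk computation and an LLM for contextual reasoning, can be sketched as a two-stage triage pipeline. The scoring thresholds and the prompt-building stub below are stand-ins, not Frenos’s actual design:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    asset: str
    signal: str
    anomaly_score: float  # 0..1, imagined output of a fast ML model

def ml_fast_path(alerts, low=0.2, high=0.8):
    """Cheap first pass: auto-close obvious noise, auto-escalate obvious
    threats, and pass only the ambiguous middle band to the LLM stage."""
    closed, escalated, ambiguous = [], [], []
    for a in alerts:
        if a.anomaly_score < low:
            closed.append(a)
        elif a.anomaly_score > high:
            escalated.append(a)
        else:
            ambiguous.append(a)
    return closed, escalated, ambiguous

def llm_prompt(alert):
    """Placeholder for the slow, contextual stage: builds the prompt a
    real system might send to a local reasoning model."""
    return (f"Asset {alert.asset} raised '{alert.signal}' "
            f"(score {alert.anomaly_score:.2f}). Escalate or close?")

alerts = [
    Alert("plc-01", "unexpected firmware write", 0.95),
    Alert("historian", "off-hours login", 0.55),
    Alert("hmi-03", "routine config poll", 0.05),
]
closed, escalated, ambiguous = ml_fast_path(alerts)
prompts = [llm_prompt(a) for a in ambiguous]
```

The design point is that the expensive reasoning model only ever sees the small ambiguous slice, which is how a hybrid system can stay both fast and accurate.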
Finding the Right AI Talent
When asked about the kind of people it takes to build a company like Frenos, Thomas highlighted two core traits: adaptability and curiosity.
“I care more about how fast someone learns than what they already know,” he said. “If I give you five research papers, can you summarize them for me by the end of the week? That tells me more than your resume.”
Frenos separates its AI team into LLM specialists and machine learning engineers, recognizing the distinct skill sets each role requires. Ideal candidates have both technical proficiency and domain expertise in cybersecurity. “It’s easier to train a cybersecurity engineer in AI than the other way around,” Thomas noted.
Balancing Innovation and Regulation
As AI regulations tighten worldwide, Frenos’s on-premises architecture gives it a competitive edge. Clients retain full control over their data, avoiding compliance risks tied to external model providers.
“The platform doesn’t send information anywhere. Everything is processed locally,” Thomas said. “That’s how we help clients who handle sensitive data stay compliant and confident.”
He believes that as privacy laws evolve, local AI solutions will become the default for critical industries.
Looking Ahead: The Future Beyond Transformers
Thomas closed the interview with a bold prediction. “Within two years, I think transformer-based large language models will be obsolete. Someone will develop a new architecture that’s smaller, faster, and more efficient,” he said.
As global companies explore new frameworks for AI, Frenos’s innovation demonstrates that bigger isn’t always better. By combining local intelligence with domain-specific expertise, the company is charting a path toward smarter, safer, and more sustainable cybersecurity.
Conclusion
The story of Frenos illustrates a broader shift in AI development. The future isn’t just about building bigger models; it’s about building smarter systems that fit the realities of business, regulation, and security.
From healthcare to hacking to entrepreneurship, Harry Thomas’s journey mirrors the evolution of AI itself—cross-disciplinary, fast-moving, and focused on solving real-world problems.
Frenos shows that when AI meets purpose, it can transform not just technology, but the way industries protect the infrastructure we all depend on.
