Cybersecurity no longer breaks because of weak tools or slow response. It breaks because human reasoning cannot operate continuously at machine speed.
In this interview, Glemad founder David Idris explains why incident-based security models have reached their limit and why continuous AI reasoning is becoming unavoidable infrastructure.
1. Looking back at Glemad’s shift from consulting to building ADT, was there a specific moment when you realized human-led security simply could not scale against AI-driven attacks?
David Idris: Yes. The realization came when security stopped being about incidents and became about continuous interpretation.
In consulting, you respond to events. In modern infrastructure, there is no pause between events. Cloud workloads change constantly. Identities rotate. Permissions drift. Attackers exploit that motion rather than a single vulnerability.
At that point, security teams were no longer failing because they lacked tools or expertise. They were failing because the system demanded uninterrupted reasoning at machine speed. Humans cannot continuously interpret logs, behavior, policy, and risk across a live environment without becoming the bottleneck.
That is when we stopped optimizing response workflows and started building intelligence that could reason continuously. ADT was born out of that constraint.
2. You’ve said you don’t view AI with “starry-eyed fascination,” but as survival infrastructure. What is one popular AI feature or trend you deliberately killed from your roadmap because it failed that test?
David Idris: We deliberately killed chat-first security.
It looks impressive in demos. You ask a question, the AI explains something, and it feels productive. But in real security operations, conversation often introduces delay instead of resolution.
Security does not fail because teams lack explanations. It fails because decisions arrive too late or without sufficient verification. If AI cannot shorten time-to-containment while remaining auditable, it is not infrastructure.
We still use language where it adds clarity, but it cannot be the core. Defense systems must reason, verify, and act within constraints. Anything else is cosmetic.
3. Your recent $500K raise went explicitly into GPU capacity, not growth marketing. Why does owning compute matter more than renting APIs at this stage of autonomous defense?
David Idris: Because autonomous defense is a systems and model problem, not a wrapper problem.
Owning compute gives us control over training data, evaluation, latency, failure analysis, and iteration speed. It allows us to train security-native models under predictable conditions and deploy them in regulated environments without dependency risk.
Renting APIs makes sense when you are building features. It breaks down when you are building infrastructure that must be dependable, explainable, and sovereign.
At this stage, compute is leverage. Marketing can follow.
4. Most “AI security” products today essentially wrap general LLMs. What fundamentally breaks when you apply a general-purpose model trained on internet text to live security decisions that require reading system logs?
David Idris: The core issue is context mismatch.
General LLMs are trained to be fluent and helpful across broad language tasks. System logs are structured, temporal, and adversarial. They represent state transitions, not prose.
When a model trained primarily on internet text is applied to logs, it often produces plausible explanations that are not operationally correct. In security, plausibility is dangerous. Decisions must be grounded in evidence, policy, and system state.
ADT models are trained and evaluated specifically for security reasoning. They are built to interpret behavior, correlate signals, and operate under constraints. That difference matters when the cost of error is real.
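To make the contrast concrete, here is a minimal sketch of what treating logs as state transitions rather than prose can look like. It is illustrative only: the `LogEvent` shape, the `role_attach` action, and the baseline drift check are assumptions made for this example, not a description of ADT's actual pipeline.

```python
# Illustrative only: a toy model of logs as state transitions, not ADT's pipeline.
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class LogEvent:
    """A structured, timestamped state transition, not free-form prose."""
    timestamp: datetime
    identity: str
    action: str      # e.g. "role_attach" (hypothetical action name)
    resource: str    # e.g. a policy or role identifier


def detect_privilege_drift(
    events: list[LogEvent],
    baseline: dict[str, set[str]],
) -> list[LogEvent]:
    """Flag transitions that move an identity outside its recorded baseline."""
    suspicious = []
    for event in sorted(events, key=lambda e: e.timestamp):
        allowed = baseline.get(event.identity, set())
        if event.action == "role_attach" and event.resource not in allowed:
            suspicious.append(event)  # flagged on evidence, not on a plausible narrative
    return suspicious
```

The point of the sketch is that the detection decision rests on recorded state and a baseline, not on whether a generated explanation sounds plausible.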
5. How do you prevent probabilistic models from making dangerous configuration changes or response decisions in live environments?
David Idris: By never giving them unchecked authority.
In our architecture, reasoning, decisioning, and actuation are distinct layers. The model reasons and proposes. Policy engines verify. Guardrails enforce limits on scope, reversibility, and blast radius.
High-impact actions require multi-signal confirmation and policy alignment. Default responses prioritize containment and verification before irreversible change.
Autonomy does not mean freedom. It means constrained intelligence operating within clearly defined boundaries.
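A minimal sketch of the layering described above, assuming it can be modelled as propose, verify, actuate. The `Proposal`, `PolicyEngine`, and `actuate` names and the thresholds are invented for illustration; PulseADT's actual implementation is not public.

```python
# Illustrative sketch of reason -> verify -> act layering; not PulseADT's real code.
from dataclasses import dataclass, field


@dataclass
class Proposal:
    """What the reasoning layer emits: a proposed action, never a direct command."""
    action: str                  # e.g. "isolate_host"
    target: str
    reversible: bool
    blast_radius: int            # number of workloads the action would touch
    supporting_signals: list[str] = field(default_factory=list)


class PolicyEngine:
    """Verification layer: the model proposes, policy decides what is allowed."""
    MAX_BLAST_RADIUS = 5                 # illustrative limit, not a product setting
    MIN_SIGNALS_FOR_IRREVERSIBLE = 3     # multi-signal confirmation for high-impact actions

    def approve(self, p: Proposal) -> bool:
        if p.blast_radius > self.MAX_BLAST_RADIUS:
            return False  # out of scope: hold for human review
        if not p.reversible and len(p.supporting_signals) < self.MIN_SIGNALS_FOR_IRREVERSIBLE:
            return False  # irreversible change without enough corroboration
        return True


def actuate(p: Proposal, policy: PolicyEngine) -> str:
    """Actuation layer: executes only policy-approved proposals."""
    if not policy.approve(p):
        return f"BLOCKED: {p.action} on {p.target} held for review"
    return f"EXECUTED: {p.action} on {p.target}"
```

The design choice worth noting is that the reasoning layer never calls the actuator directly; every proposal passes through an explicit policy check first.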
6. Where exactly is the line between “automation” and “autonomy” in PulseADT, and how do you stop an autonomous system from overreacting to a false positive?
David Idris: Automation executes predefined scripts. Autonomy decides whether execution is appropriate.
PulseADT does not respond to isolated alerts. It reasons across correlated evidence, historical context, identity behavior, workload activity, and policy drift. Actions are staged. Contain first. Verify next. Escalate only when confidence is sufficient.
Overreaction is prevented by design. The system is optimized to reduce harm under uncertainty, not to maximize aggression.
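As a sketch of how staging might work in code, the fragment below orders reversible containment before verification and gates escalation on confidence and corroborating evidence. The thresholds and step names are placeholders for this example, not PulseADT parameters.

```python
# Illustrative staging: contain first, verify next, escalate only on sufficient confidence.
# Thresholds are assumptions for the sketch, not product settings.
CONTAIN_THRESHOLD = 0.5
ESCALATE_THRESHOLD = 0.9


def staged_response(confidence: float, evidence_sources: int) -> list[str]:
    """Return ordered, reversible-first steps for a suspected incident."""
    if confidence < CONTAIN_THRESHOLD:
        return ["observe: keep correlating signals, take no action yet"]
    steps = [
        "contain: quarantine the workload (reversible)",
        "verify: correlate identity, workload, and policy-drift signals",
    ]
    if confidence >= ESCALATE_THRESHOLD and evidence_sources >= 3:
        steps.append("escalate: revoke credentials and notify operators")
    else:
        steps.append("hold: keep containment in place while evidence accumulates")
    return steps
```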
7. When PulseADT blocks an action or isolates a host in a regulated environment like banking, what evidence trail exists for auditors? Is there a “chain-of-thought” log?
David Idris: We do not expose raw chain-of-thought.
Auditors do not need internal model deliberation. They need accountability. PulseADT produces structured, audit-ready records that show observed signals, evaluated controls, policy alignment, risk scoring, actions taken, and outcomes verified.
Every decision is traceable, reproducible, and defensible without exposing sensitive model internals. That is what regulated environments require.
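To illustrate what an audit-ready record might contain, here is a minimal sketch. The field names follow the categories listed above; the schema itself is hypothetical, not PulseADT's actual format.

```python
# Illustrative audit record shape; field names are assumptions, not PulseADT's schema.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    """Structured, reproducible evidence trail for a single automated decision."""
    decision_id: str
    observed_signals: list[str]
    evaluated_controls: list[str]
    policy_alignment: str        # which policy clause authorised the action
    risk_score: float
    actions_taken: list[str]
    outcome_verified: bool
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)  # human- and machine-readable
```

Records of this kind could be written append-only and replayed against the same policies, which is what makes a decision traceable and reproducible for an auditor without exposing model internals.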
8. How much of your customer demand, especially in the Global South, is driven by data sovereignty rather than pure security performance?
David Idris: A substantial portion.
Many organizations are less concerned about theoretical breach scenarios than about where their data goes during normal operations. Sovereignty is operational, not ideological.
PulseADT is designed to run locally, reason locally, and keep sensitive telemetry within jurisdiction. For regulated institutions, that capability often determines whether adoption is possible at all.
9. You previously explored local language support through Dara AI. Did that work show that linguistic nuance actually improves detection of social engineering and insider threats, or is it still underestimated by the Western-centric security industry?
Linguistic nuance is still underestimated, particularly in social engineering.
David Idris: Dara AI was released as our early security advisor interface. It sat between organizations and PulseADT, translating intent and removing the technical burden of log interpretation for teams without deep security expertise.
We later discontinued Dara AI because we realized the long-term solution was not a chat layer. The intelligence needed to move deeper into the system itself.
The insight remains valid. Language context matters in detecting persuasion, fraud, and insider risk. But the product form must be infrastructure, not conversation.
10. Deloitte flags “Shadow AI” as a top risk for 2025. How should enterprises realistically govern employee AI use without killing productivity?
David Idris: You cannot govern what you do not provide.
Shadow AI emerges when employees have real needs and official tools lag behind. The solution is not blanket bans. It is sanctioned alternatives with clear policies, visibility, and data controls.
Enterprises should focus on approved environments, risk-based governance, and observability. Governance must feel like enablement, not obstruction.
11. Can deep AI infrastructure talent actually be found outside traditional tech hubs, or did Glemad have to create that pipeline itself?
David Idris: The talent exists. The pipeline does not.
We had to invest heavily in internal training, mentorship, and real systems ownership. Geography does not limit capability. Lack of exposure does.
Building the pipeline was not optional. It was necessary to operate at the model and architecture level rather than just integration.
12. If autonomous attacks are inevitable, what role do humans realistically play in cybersecurity five years from now?
David Idris: Humans become governors of systems, not operators of alerts.
Machines will handle detection, correlation, and containment. Humans will define intent, policy, and acceptable risk. They will audit outcomes and refine constraints.
Security will move from reactive operations to system design. Autonomous defense handles the motion. Humans remain accountable for the rules.
That is the future we are building toward at Glemad.
Editor’s Note
This interview argues that modern cybersecurity has crossed a threshold where human judgment is no longer scalable, making continuous, constrained AI reasoning a structural requirement rather than an optimization.
