
Enkrypt AI Raises $2.4 Million to Build a Visibility and Security Layer for Generative AI


Generative AI and Large Language Models (LLMs) present an opportunity for enterprises to gain new efficiencies and improve functionality; however, the safety and security of such technology remains an obstacle. Enkrypt AI is today announcing a $2.35 million funding round to solve this problem for enterprises, ensuring their use of generative AI and LLMs is safe, secure and compliant. The seed funding round was led by BoldCap with participation from Berkeley SkyDeck, Kubera VC, Arka VC, Veredas Partners, Builders Fund and angel investors in the AI, healthcare and enterprise space.

Enkrypt AI was founded in 2022 by Sahil Agarwal (CEO) and Prashanth Harshangi (CTO), two Yale PhDs and AI practitioners. With Enkrypt AI, enterprises have a control layer between LLMs and end-users that provides security and safety functionality. Enkrypt AI Sentry has been able to reduce vulnerabilities across a wide range of LLMs, demonstrating a reduction in jailbreaks from 6% to 0.6% in the case of Llama2-7B. The Enkrypt AI team has previously developed and deployed AI models for the US Department of Defense and for businesses in self-driving cars, music, insurance and fintech.

Enkrypt AI’s Sentry is the only platform that combines both visibility and security for generative AI applications in the enterprise, so that enterprises can secure and accelerate their generative AI adoption with confidence. A leading Fortune 500 data infrastructure company is using Sentry to gain complete access control and visibility over all its LLM projects, helping it to detect and mitigate LLM risks such as jailbreaks and hallucinations, and to prevent sensitive data leaks. This is ultimately leading to faster adoption of LLMs for even more use cases across departments.

“Businesses are really excited about using LLMs, but they’re also worried about how trustworthy they are and the uncertain regulatory landscape,” commented Sahil Agarwal, Co-founder and CEO of Enkrypt AI. “Based on our conversations with CIOs, CISOs and CTOs, we are convinced that for LLMs to be widely adopted, adoption must be built on a foundation of security, privacy, and compliance. With Sentry, we are merging visibility and security to ultimately align with and support adherence to regulatory frameworks such as the White House Executive Order on AI, the EU AI Act, and other AI-centric regulations, laying the groundwork for safe and compliant AI integration.”

Enkrypt AI says it can help enterprises accelerate their generative AI adoption by up to 10x, deploying applications into production within weeks rather than the two years enterprises currently forecast. Its comprehensive approach addresses the key concerns causing hesitation among enterprise decision-makers:

  • Delivers unmatched visibility and oversight of LLM usage and performance across business functions.
  • Ensures data privacy and security by protecting sensitive information and guarding against threats.
  • Manages compliance with evolving standards through automated monitoring and strict access controls.

“As the benefits of AI become ever more tangible, so do the risks,” commented Prashanth Harshangi, Co-founder and CTO at Enkrypt AI. “Our platform does more than just detect vulnerabilities; it equips developers with a comprehensive toolkit to fortify their AI solutions against both current and future threats. We’re championing a paradigm where trust and innovation coalesce, enabling the deployment of AI technologies with the confidence that they are as secure and reliable as they are revolutionary.”

The safety of AI has been a key concern for policymakers and experts. Earlier this month, the US National Institute of Standards and Technology (NIST) established an AI safety consortium. In an era where generative AI is becoming a transformative force across industries, safeguarding these systems goes beyond best practice – it’s a necessity.

“Our mission at Enkrypt AI is to provide the tools that allow enterprises to not only harness the incredible potential of Generative AI but to do so with the utmost confidence in the security and compliance of their applications,” added Sahil Agarwal. “With the support of our investors and the advanced capabilities of our platform, we are setting a new standard in AI safety – protecting users and organizations against emerging threats while enabling the wider adoption of AI innovations in a responsible manner.”

“We are super excited to be backing practitioners like Sahil and Prashanth who are at the intersection of Security and Generative AI,” said Sathya Nellore Sampat, General Partner at BoldCap. “Enterprise security is non-negotiable. With the explosive growth of Generative AI and LLM usage within companies, the attack surface has dramatically increased. Enkrypt is the command center to control, monitor and have visibility across Generative AI initiatives.”
