New AI Security Tool Enables Organizations to Define Trust Zones for Gen-AI Models

Tumeryk Introduces AI Trust Scores and Trust Score Manager to Enhance Gen-AI Security 

Redwood Shores, CA-based startup Tumeryk has unveiled its AI Trust Scores, a tool designed to help organizations assess the security risks associated with various generative AI models. Alongside it, the company launched the AI Trust Score Manager, a platform that enables businesses to implement and monitor security controls on their AI deployments.

The AI Trust Scores provide Chief Information Security Officers (CISOs) with a comprehensive evaluation of generative AI models based on nine critical security factors: prompt injection, hallucinations, insecure output handling, security, toxicity, sensitive information disclosure, supply chain vulnerability, psychological safety, and fairness. These scores help organizations make informed decisions when selecting AI models and deploying security safeguards. 
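For illustration, one way such a composite score could be formed is as a weighted aggregate of per-factor scores. The Python sketch below is hypothetical: the factor names follow the article, but the 0-1000 scale is inferred from the article's examples, and the equal weighting and the aggregate_trust_score function are assumptions, not Tumeryk's published methodology.

# Hypothetical sketch: combining per-factor scores into one composite trust
# score. Factor names are from the article; the 0-1000 scale is inferred from
# its examples, and equal weighting is an assumption, not Tumeryk's method.

FACTORS = [
    "prompt_injection",
    "hallucinations",
    "insecure_output_handling",
    "security",
    "toxicity",
    "sensitive_information_disclosure",
    "supply_chain_vulnerability",
    "psychological_safety",
    "fairness",
]

def aggregate_trust_score(factor_scores: dict[str, float]) -> float:
    """Average the nine factor scores into a composite (illustrative only)."""
    missing = set(FACTORS) - factor_scores.keys()
    if missing:
        raise ValueError(f"missing factor scores: {sorted(missing)}")
    # Equal weights for illustration; a production scorer would likely weight
    # factors by deployment context and risk appetite.
    return sum(factor_scores[f] for f in FACTORS) / len(FACTORS)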

Some findings from the AI Trust Scores highlight unexpected strengths in certain models. For instance, China’s DeepSeek AI model performs exceptionally well in preventing sensitive information disclosure, scoring 910, compared to Claude Sonnet 3.5 (687) and Meta Llama 3.1 405B (557). However, DeepSeek still exhibits risks in areas like prompt injection and hallucination, challenges that many generative AI models initially faced. 

Tumeryk CEO Rohit Valia emphasized the importance of this tool, stating, “CISOs implementing gen-AI solutions struggle to measure their risk due to the non-deterministic nature of AI responses. With AI Trust Scores, they can define Trust Zones and integrate these scores into their security frameworks to receive alerts and log incidents when thresholds are breached.” 
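To make the Trust Zone idea concrete, the hypothetical sketch below expresses a zone as per-factor minimum scores and flags any breach for alerting. The thresholds, names, and structure are invented for illustration; Tumeryk's actual policy format is not described in the article.

# Hypothetical sketch of a Trust Zone: minimum per-factor scores a model's
# responses must meet, with breaches reported for alerting and incident
# logging. Thresholds and names are invented for illustration.

TRUST_ZONE = {
    "prompt_injection": 700,
    "hallucinations": 650,
    "sensitive_information_disclosure": 800,
}

def breached_factors(factor_scores: dict[str, float]) -> list[str]:
    """Return the factors whose scores fall below the zone's thresholds."""
    return [
        factor
        for factor, minimum in TRUST_ZONE.items()
        if factor_scores.get(factor, 0.0) < minimum
    ]

# Example: a model strong on disclosure but weak on prompt injection.
print(breached_factors({
    "prompt_injection": 540,
    "hallucinations": 720,
    "sensitive_information_disclosure": 910,
}))  # -> ['prompt_injection']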

While GPT-4o remains the strongest overall security performer, other models present trade-offs. Meta-Llama-3.2-1B-In, for example, provides open-source security but shows variability in risk handling. DeepSeek AI, though strong in logical reasoning, remains vulnerable to prompt injection and hallucinations—issues expected to improve as the model evolves. 

The AI Trust Score Manager goes beyond evaluation by providing real-time insights into AI system performance, identifying vulnerabilities, and offering actionable recommendations for improving security and compliance. “This tool helps organizations proactively manage AI-related risks while ensuring alignment with regulatory standards and ethical guidelines,” Valia added. 

By integrating these solutions, organizations can enhance AI security, minimize risk exposure, and make data-driven decisions when deploying generative AI models. 

He further elaborated, “When a user or AI agent interacts with a large language model (LLM) secured by the AI Trust Score Manager control layer, a real-time AI Trust Score is generated for the response. Based on predefined policies—established using AI Trust Score thresholds or written in Nvidia Conversational Language (Colang)—access to the response is either granted or denied. This approach is similar to how the Fair Isaac Corporation detects credit card fraud by evaluating billions of transactions with a FICO score based on multiple risk factors.” 
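A minimal sketch of that control-layer flow, under the same caveat: the allow/deny decision and incident logging follow Valia's description, while the threshold value and function names are hypothetical, and the real-time scoring itself (or a Colang policy) is assumed to come from an external evaluation layer.

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("trust_score_gate")

THRESHOLD = 700  # hypothetical minimum composite trust score

def gate_response(response: str, trust_score: float,
                  threshold: float = THRESHOLD) -> str | None:
    """Grant or deny access to an LLM response based on its trust score.

    The score would come from a real-time evaluation layer like the one
    described above; this sketch shows only the policy decision.
    """
    if trust_score >= threshold:
        return response  # grant: the response falls inside the Trust Zone
    # Deny: withhold the response and log an incident, as described above.
    logger.warning("trust score %.0f below threshold %.0f; response denied",
                   trust_score, threshold)
    return None

# Usage with hypothetical scores:
gate_response("Here is the requested summary ...", trust_score=910)  # granted
gate_response("Potentially leaky output ...", trust_score=420)       # denied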


