• 'Not just development tools': Security experts discover critical flaw in OpenAI's Codex

    From TechnologyDaily@1337:1/100 to All on Tuesday, March 31, 2026 17:15:26
    'Not just development tools': Security experts discover critical flaw in OpenAI's Codex which could compromise entire enterprise organizations

    Date:
    Tue, 31 Mar 2026 14:25:00 +0000

    Description:
    Researchers managed to steal GitHub OAuth tokens by abusing a command injection vulnerability.

    FULL STORY ======================================================================

    BeyondTrust's Phantom Labs found a critical command injection flaw in OpenAI's ChatGPT Codex. The vulnerability let attackers steal GitHub OAuth tokens via malicious branch names. OpenAI patched it with stronger input validation, shell escaping, and token controls.

    Experts have claimed OpenAI's ChatGPT Codex carried a critical command injection vulnerability which allowed threat actors to steal sensitive GitHub authentication tokens.

    This is according to BeyondTrust's research department, Phantom Labs, whose work helped OpenAI identify and patch the flaw. ChatGPT Codex is a coding feature within the famed chatbot that helps users write and edit software using plain-language instructions. Users can turn human-language requests into working code, or have the tool suggest fixes and improvements in the same way.

    When a developer makes changes to a GitHub project, they do so in their own copy, which is a separate branch of the project. According to BeyondTrust Phantom Labs, the problem stems from the way Codex processes branch names during task creation.

    Apparently, the tool allowed a malicious actor to manipulate the branch parameter and inject arbitrary shell commands while setting up the environment.
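    OpenAI has not published the vulnerable code, but the class of bug is well understood. A minimal Python sketch of the pattern, assuming a setup step that splices an attacker-controlled branch name into a shell command string (the `run_task_unsafe` helper and the `echo` stand-in are illustrative, not Codex's actual code):

    ```python
    import subprocess

    def run_task_unsafe(branch: str) -> str:
        # VULNERABLE: the branch name is spliced directly into a shell
        # string, so any shell metacharacters it contains (';', '|', '$()')
        # are interpreted as commands, not as part of the branch name.
        cmd = f"echo preparing branch {branch}"
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.stdout

    # An attacker-controlled branch name: everything after the ';' runs
    # as a second command inside the environment.
    print(run_task_unsafe("feature/x; echo INJECTED"))
    ```

    The injected command executes with whatever the container can see, which is how credentials such as OAuth tokens become reachable.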

    These commands could run arbitrary code within the container, including malicious payloads. Phantom Labs said they were able to pull GitHub OAuth tokens this way, gaining access to a theoretical third-party project and using the tokens to move laterally within GitHub.

    Unfortunately, it gets worse. Codex's command-line interface, SDK, and development environment integrations were all flawed in the same way, and the researchers said that by embedding malicious commands into GitHub branch names they would be able to compromise numerous developers working on the same project.

    After the findings were responsibly disclosed to OpenAI, the company fixed the problem with improved input validation, stronger shell escaping protections, and better controls over token exposure inside containers. OpenAI also said it limited token scope and lifetime during task creation.
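    The patch details are OpenAI's own, but two of the named mitigations, input validation and shell escaping, can be sketched in Python (the character allowlist and helper name are assumptions for illustration, not OpenAI's implementation):

    ```python
    import re
    import shlex
    import subprocess

    # Conservative allowlist for branch names: letters, digits, and the
    # separators Git commonly permits. Anything else is rejected outright.
    BRANCH_RE = re.compile(r"^[A-Za-z0-9._/-]+$")

    def run_task_safe(branch: str) -> str:
        # Mitigation 1: validate the input before it goes anywhere near
        # a shell. Metacharacters like ';' or '|' fail this check.
        if not BRANCH_RE.match(branch):
            raise ValueError(f"invalid branch name: {branch!r}")
        # Mitigation 2: even validated values get shell-escaped, so a
        # future relaxation of the allowlist cannot reintroduce the bug.
        cmd = f"echo preparing branch {shlex.quote(branch)}"
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.stdout

    print(run_task_safe("feature/x"))
    ```

    Passing the earlier malicious name (`"feature/x; echo INJECTED"`) now raises `ValueError` instead of executing a second command; avoiding `shell=True` entirely and passing an argument list is an even stronger variant of the same idea.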

    "AI coding agents are live execution environments with access to sensitive credentials and organizational resources," the researchers concluded.

    Because these agents act autonomously, security teams must understand how to govern AI agent identities to prevent command injection, token theft, and automated exploitation at scale. As AI agents become more deeply integrated into developer workflows, the security of the containers they run in, and the input they consume, must be treated with the same rigor as any other application security boundary.




    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/not-just-development-tools-security-experts-discover-critical-flaw-in-openais-codex-which-could-compromise-entire-enterprise-organizations


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)