Microsoft open-sources Counterfit, an AI security risk assessment tool
Microsoft's Counterfit toolkit aims to enable security teams to more easily test the robustness of AI systems both in and prior to production.

Microsoft today open-sourced Counterfit, a tool designed to help developers test the security of AI and machine learning systems. The company says that Counterfit can enable organizations to conduct assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.

AI is being increasingly deployed in regulated industries like health care, finance, and defense. But organizations are lagging behind in their adoption of risk mitigation strategies. A Microsoft survey found that 25 out of 28 businesses indicated they don't have the right resources in place to secure their AI systems, and that security professionals are looking for specific guidance in this space.

Microsoft says that Counterfit was born out of the company's need to assess its own AI systems for vulnerabilities, with the goal of proactively securing AI services. The tool started as a corpus of attack scripts written specifically to target AI models and then morphed into an automation product for benchmarking multiple systems at scale.

Under the hood, Counterfit is a command-line utility that provides an automation layer over adversarial machine learning frameworks, preloaded with attack algorithms that can be used to evade and steal models. Counterfit seeks to make published attacks accessible to the security community while offering an…
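To illustrate the kind of evasion attack such tools automate, here is a minimal, hypothetical sketch in pure Python: an attacker with only black-box query access to a model searches for a small input perturbation that flips its prediction. The toy model, function names, and parameters below are illustrative assumptions, not Counterfit's actual code or API; real tools use far stronger attack algorithms against deployed ML endpoints.

```python
import random

# Toy "victim" model: the attacker has black-box access only
# (input in, label out). A real assessment would query a deployed
# ML service endpoint instead. This decision rule is hypothetical.
def victim_predict(x):
    return 1 if (0.6 * x[0] + 0.4 * x[1]) > 0.5 else 0

def random_evasion(x, budget=0.2, tries=1000, seed=0):
    """Search for a small perturbation that flips the model's label.

    This mimics the simplest form of black-box model evasion;
    security tools automate this class of attack at scale.
    """
    rng = random.Random(seed)
    original = victim_predict(x)
    for _ in range(tries):
        # Perturb each feature within the allowed budget.
        candidate = [xi + rng.uniform(-budget, budget) for xi in x]
        if victim_predict(candidate) != original:
            return candidate  # adversarial example found
    return None  # model resisted this naive attack within budget

sample = [0.55, 0.5]            # classified as 1 (score 0.53)
adv = random_evasion(sample)
print(victim_predict(sample))   # 1
print(victim_predict(adv))      # 0 -- same input, nudged slightly
```

The point of the sketch is that the attacker never sees the model's weights; only queries are needed, which is why evasion attacks are a concern for any publicly exposed AI service.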
Kyle Wiggers