AI Innovations in DevSecOps, Construction, and Governance: Spotlight on BenchPrep, Modalina AI, and IBM's watsonx.governance

June 21, 2025


How To Look At DevSecOps With AI: The New Agents Approach (MCP, ACP, A2A)

Venkatadri Marella, Lead DevOps Engineer at BenchPrep, discusses how emerging AI techniques built on large language models and multi-agent systems are reshaping DevSecOps, specifically through the Model Context Protocol (MCP), the Agent Communication Protocol (ACP), and agent-to-agent (A2A) orchestration. These approaches enable dynamic governance and automation of infrastructure across environments such as cloud and Kubernetes, shifting from static pipeline configurations to role-based agent decision-making. That shift covers pre-deployment threat detection, self-healing infrastructure as code (IaC), and compliance enforced through agent collaboration. Key challenges include ensuring auditability, secure access, and consistent alignment with organizational policies. By integrating AI agents, IaC moves from static definitions toward intelligent infrastructure, offering greater flexibility and security across the software development life cycle. (Source)
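
To make compliance-through-agents concrete, here is a minimal Python sketch, not drawn from BenchPrep's actual setup: a hypothetical PolicyAgent reviews a planned infrastructure change before deployment, auto-remediates one simple violation, and records every decision so the run stays auditable. The resource format, rule names, and class names are illustrative assumptions.

```python
# Minimal sketch (not BenchPrep's implementation): a policy-check "agent" that
# reviews a planned infrastructure change before deployment, fixes one simple
# violation automatically, and logs each decision for auditability.
from dataclasses import dataclass, field


@dataclass
class Finding:
    resource: str
    rule: str
    auto_remediated: bool


@dataclass
class PolicyAgent:
    audit_log: list = field(default_factory=list)

    def review(self, plan: list[dict]) -> list[dict]:
        """Return a remediated copy of the plan, logging every finding."""
        approved = []
        for resource in plan:
            fixed = dict(resource)
            # Rule: storage buckets must never be publicly readable -> fix in place.
            if fixed.get("type") == "bucket" and fixed.get("public", False):
                fixed["public"] = False
                self.audit_log.append(Finding(fixed["name"], "no-public-buckets", True))
            # Rule: every resource needs an owner tag -> flag for a human, don't guess.
            if "owner" not in fixed.get("tags", {}):
                self.audit_log.append(Finding(fixed["name"], "missing-owner-tag", False))
            approved.append(fixed)
        return approved


if __name__ == "__main__":
    agent = PolicyAgent()
    plan = [{"type": "bucket", "name": "logs", "public": True, "tags": {}}]
    print(agent.review(plan))   # bucket is made private before deployment
    print(agent.audit_log)      # both findings are recorded for review
```

In an A2A-style setup, a separate compliance or remediation agent would consume this audit log rather than relying on a human to read it directly.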

After Crossing Continents and Industries, Egor Folley Is Contributing to the Future of AI

Egor Folley, a 27-year-old AI entrepreneur, is transforming the construction industry through his company Modalina AI, a no-code platform that turns raw visual data into actionable insights, helping managers identify risks and make informed decisions. Drawing on prior experience at ARTIAL, where he adapted drones for autonomous navigation, Egor focused on the practical needs of the construction sector. Modalina AI currently has three paying customers and significant market momentum, with the aim of cutting substantial labor-related costs and increasing operational efficiency. Supported by Global Detroit as a Global Entrepreneur-in-Residence, Egor mentors emerging startups and plans to expand Modalina into sectors such as infrastructure and healthcare, with the long-term vision of building a $10 billion company. (Source)

IBM combines governance and security tools to solve the AI agent oversight crisis

IBM is strengthening its AI governance offering by integrating watsonx.governance with Guardium AI Security to streamline AgentOps for enterprises. AgentOps, which covers managing the agent development lifecycle, is strained by the proliferation of tools as vendors rush to help enterprises build diverse AI agents. The integration aims to curb this tool sprawl by offering unified governance and security controls, reducing risks such as unmonitored "shadow agents." Heather Gentile of IBM emphasizes the importance of integrated oversight for AI projects. However, enterprises must deploy both watsonx.governance and Guardium AI Security to benefit from the integration, as noted by Vishal Kamat, IBM's VP of data security. (Source)
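
As a rough illustration of the shadow-agent problem (a generic sketch, not the watsonx.governance or Guardium AI Security API), one common pattern is to reconcile the agents actually observed running in an environment against a registry of agents that passed governance review. The registry entries and function name below are hypothetical.

```python
# Generic illustration of shadow-agent detection; not an IBM API.
# "Shadow agents" are agents running in the environment that never went
# through governance review, so they are absent from the approved registry.
APPROVED_REGISTRY = {"invoice-triage-agent", "support-summarizer-agent"}  # hypothetical entries


def find_shadow_agents(observed: set[str]) -> set[str]:
    """Agents observed at runtime but missing from the approved registry."""
    return observed - APPROVED_REGISTRY


if __name__ == "__main__":
    running = {"invoice-triage-agent", "marketing-copy-agent"}
    print(find_shadow_agents(running))  # {'marketing-copy-agent'}
```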

AI agents win over professionals - but only to do their grunt work, Stanford study finds

AI agents are increasingly popular in Silicon Valley as a way to boost business productivity, but their acceptance by individual workers hinges on not infringing too much on human agency, according to a new Stanford University study titled "Future of Work with AI Agents." The research moves past AI hype to explore practical integration into daily work routines, finding that most workers are open to AI automating low-stakes, repetitive tasks so they can focus on more meaningful work. The study, which drew on input from 1,500 professionals as well as AI experts, produced the AI Agent Worker Outlook & Readiness Knowledge Bank (WORKBank) and highlights a gap between the tasks AI is currently deployed for and the tasks workers actually want automated. It also identified a growing premium on interpersonal skills over analytical ones as AI takes over more information-processing work. The research underscores a preference for maintaining human control, suggesting potential friction as AI spreads through the workplace. (Source)

2025 is NOT the Year of AI Agents

Andrej Karpathy, former head of AI at Tesla, explored the evolving landscape of software development with large language models (LLMs) in his talk at YC's AI Startup School. He outlined a categorization of software into three stages: traditional programming (Software 1.0), neural networks whose behavior is learned from data (Software 2.0), and the latest, Software 3.0, where LLMs are directed through natural language prompts. Karpathy emphasizes the importance of maintaining human oversight as LLMs become partially autonomous, suggesting developers build augmented systems rather than fully autonomous agents. He advocates for new user interfaces tailored to these "people spirits" on the internet and compares the development of LLMs to semiconductor manufacturing. Although LLMs currently operate like centralized mainframes because of cost, Karpathy envisions them as a new standard of computing, providing a programmable intelligence layer for the internet. He also highlights applications like MenuGen and tools like Perplexity AI that exemplify intelligent orchestration with human-in-the-loop systems, underscoring both the potential and the limitations of modern AI. (Source)
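
A toy contrast may make the three stages concrete. This is a hedged sketch, not code from Karpathy's talk: the same sentiment-classification task written in each paradigm, with call_llm standing in for any chat-completion client rather than a specific vendor API.

```python
# One task, three paradigms (an illustrative sketch, not from the talk itself).
def sentiment_1_0(text: str) -> str:
    """Software 1.0: behavior is hand-written rules."""
    return "positive" if any(w in text.lower() for w in ("great", "love")) else "negative"


def sentiment_2_0(text: str, weights: dict[str, float]) -> str:
    """Software 2.0: behavior lives in learned weights, not explicit rules."""
    score = sum(weights.get(w, 0.0) for w in text.lower().split())
    return "positive" if score > 0 else "negative"


def sentiment_3_0(text: str, call_llm) -> str:
    """Software 3.0: the 'program' is a natural-language prompt sent to an LLM."""
    prompt = f"Answer with one word, positive or negative: {text}"
    return call_llm(prompt)


if __name__ == "__main__":
    fake_llm = lambda prompt: "positive"                      # stand-in for a real model call
    print(sentiment_1_0("I love this tool"))                  # positive
    print(sentiment_2_0("I love this tool", {"love": 1.0}))   # positive
    print(sentiment_3_0("I love this tool", fake_llm))        # positive
```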