Jacques Nack: Ending “Random Acts of AI” with Strategy, Not Hype

Companies everywhere are jumping on the AI bandwagon without much thought about strategy or consequences. This rush to adopt artificial intelligence tools often backfires, creating more problems than solutions. Jacques Nack has watched the phenomenon unfold as CEO of JNN Group, which builds an AI-powered audit and compliance platform, and he’s coined a term for it: “random acts of AI.”

How to Avoid “Random Acts of AI” in Your Organization

Most business leaders have dabbled with ChatGPT or similar tools by now. Maybe you’ve used it for research, writing emails, or brainstorming ideas. That’s fine for personal use, but when companies start deploying AI without proper planning, things can go wrong quickly. Nack has been watching this trend with growing concern. The technology, he points out, “is now so ubiquitous in our daily work lives that it’s very hard to imagine that generative AI was only introduced in our lives in late 2022 with OpenAI’s first versions.” Yet here we are, barely two years later, and businesses are treating AI like any other software purchase.

Building Strategy Before Choosing Tools

The comparison Nack makes is striking. “Random acts of AI really happen when a company decides to just adopt something that sounds good from a marketing standpoint—it’s the equivalent of self-medication without reading the labels or the precautions,” he explains. You wouldn’t take medication without understanding the side effects, yet companies are doing exactly that with AI tools. This isn’t just about wasted money or poor performance. There are real consequences when AI systems hallucinate, show bias, or fail in critical business processes. The difference is that unlike a headache pill, AI failures can affect customers, compliance requirements, and entire business operations.

By now, businesses should know better. “When we talk about aligning AI to your business processes in 2025, we have a better understanding of the kinds of outputs we can get from AI pipelines,” Nack says. “We should use that information to align our design or AI strategy with the business objective we’re trying to reach.” This means abandoning the toolkit mentality: companies shouldn’t be collecting AI tools like trading cards. They need to figure out what business problems they’re actually trying to solve, then find the right AI approach for those specific challenges. Return on investment becomes much clearer when you start with the business goal instead of the technology.

Monitoring AI for Reliable Results

Nack recommends updating company policies to address AI use directly. “Update your policies,” he advises, pointing to the several frameworks now available, from NIST’s AI Risk Management Framework to ISO/IEC 42001. These aren’t just bureaucratic exercises. They help organizations evaluate which AI initiatives actually make business sense and which ones are just expensive experiments. The key areas that need attention include data governance, explainability, and acceptable use policies. Companies also need to assign clear responsibility for AI deployments. “We need to assign responsibility for how those tools are actually deployed within the business process and toward the customer results,” Nack emphasizes.

AI systems aren’t set-and-forget the way traditional software can be. “Continuous monitoring means finding ways to always check the output against the input that went to the system, because the delegation is never really 100% reliable until we are absolutely certain of what’s happening there,” Nack explains. This ongoing oversight extends to employee training. People need to understand not just how to use AI tools, but when they should and shouldn’t use them. Clear guidelines help prevent the kind of unauthorized experimentation that can create compliance headaches later.

Focusing AI on Specialized Roles

As agentic AI becomes more common, Nack sees companies making a fundamental mistake about capabilities. “Many of us expect our AI agents to be jacks of all trades who do a little bit of everything but nothing really with high precision,” he observes. This expectation sets organizations up for disappointment. “If you are going to delegate an important process to an employee, a human being, you want them to be the best at that, right? And I think that consideration extends to agentic AI and AI processes.” The solution is building specialized AI agents that excel in specific areas rather than trying to create generalist systems that handle everything poorly.

Regulations are catching up to AI deployment, with new standards emerging regularly. Companies that take time now to build proper governance frameworks will find themselves ahead of organizations still treating AI as an experimental side project. The goal isn’t to slow down AI adoption, but to make it more strategic and effective.

Jacques Nack, Chief Executive Officer at JNN Group Inc., is a trailblazer in cloud security, compliance, and risk management. Read Jacques Nack’s full executive profile here.

Find Jacques Nack on LinkedIn. Visit Jacques’ website.
