The AI balancing act your company can't afford to fumble in 2026
There are eight key tenets your company can follow to build AI faster while keeping it safe and trustworthy.

ZDNET's key takeaways
- AI responsibility and safety are top issues for 2026.
- The best safeguard is building AI in a sandbox.
- Keep AI development simple and open.
The author of the book The Lincoln Lawyer, Michael Connelly, has turned his attention to the issues behind unrestrained corporate artificial intelligence. His latest work of fiction, The Proving Ground, is about a lawyer who files a civil lawsuit against an AI company "whose chatbot told a sixteen-year-old boy that it was OK for him to kill his ex-girlfriend for her disloyalty."
Also: Your favorite AI tool barely scraped by this safety review - why that's a problem
The book follows the case as it "explores the mostly unregulated and exploding AI business and the lack of training guardrails."
While this is a work of fiction, and the case presented is extreme, it's an important reminder that AI can go off the ethical or logical rails in many ways -- whether through bias, bad advice, or misdirection -- with real repercussions. At the same time, at least one notable AI voice warns against going overboard with attempts to regulate AI and slowing down innovation in the process.
Balance is needed
As we reported in November, at least six in 10 companies (61%) in a PwC survey say responsible AI is actively integrated into their core operations and decision-making.
A balance needs to be struck between governance and speed, and this will be the challenge for professionals and their organizations in the year ahead.
Andrew Ng, founder of DeepLearning.AI and adjunct professor at Stanford University, says vetting all AI applications through a sandbox approach is the most effective way to maintain this balance between speed and responsibility.
Also: The AI leader's new balance: What changes (and what remains) in the age of algorithms
"A lot of the most responsible teams actually move really fast," he said in a recent industry keynote and follow-up panel discussion. "We test out software in sandbox safe environments to figure out what's wrong before we then let it out into the broader world."
At the same time, recent pushes toward responsible and governed AI -- both by governments and corporations themselves -- may actually be too overbearing, he said.
"A lot of businesses put in place protective mechanisms. Before you ship something, you need legal approval, marketing approval, brand review, privacy review, and GDPR compliance. An engineer needs to get five VPs to sign off before they do anything. Everything grinds to a halt," Ng said.
The best practice is "to move fast by preemptively creating sandboxes," he continued. In this scenario, "put in place a set of rules to say 'no shipping stuff externally under the company brand,' 'no sensitive information that can be leaked,' whatever. It's only tested on the company's own employees under NDA, with only a $100,000 budget in AI tokens. By creating sandboxes that are guaranteed safe, this can create a lot of room for product and engineering teams to run really fast and try things internally."
Once an AI application is determined to be safe and responsible, "then invest in the scalability, security, and reliability to take it to scale," Ng concluded.
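To make the idea concrete, here is one way a team might encode those sandbox guardrails as a simple pre-flight check. This is only an illustrative sketch: the Experiment and SandboxPolicy names, fields, and limits below are hypothetical, not drawn from Ng's remarks or any particular framework.

```python
# Illustrative sketch only -- class names, fields, and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class Experiment:
    audience: str            # "internal" or "external"
    testers_under_nda: bool
    uses_sensitive_data: bool
    token_budget_usd: float

@dataclass
class SandboxPolicy:
    max_token_budget_usd: float = 100_000.0

    def violations(self, exp: Experiment) -> list[str]:
        """Return guardrail violations; an empty list means the experiment can run."""
        issues = []
        if exp.audience != "internal":
            issues.append("no shipping externally under the company brand")
        if not exp.testers_under_nda:
            issues.append("testers must be employees under NDA")
        if exp.uses_sensitive_data:
            issues.append("no sensitive information that could leak")
        if exp.token_budget_usd > self.max_token_budget_usd:
            issues.append(f"token spend capped at ${self.max_token_budget_usd:,.0f}")
        return issues

# Example: an internal pilot within budget clears every guardrail.
policy = SandboxPolicy()
pilot = Experiment(audience="internal", testers_under_nda=True,
                   uses_sensitive_data=False, token_budget_usd=25_000)
print(policy.violations(pilot))  # [] -> safe to move fast inside the sandbox
```

The point of a check like this is that the rules are agreed once, up front, so individual teams don't need five VPs to sign off on every internal experiment.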
Keep it simple
On the governance side, a keep-it-simple approach may help keep AI use out in the open.
"Since every team, including non-technical ones, is using AI for work now, it was important for us to set straightforward, simple rules," said Michael Krach, chief innovation officer at JobLeads. "Clarify where AI is allowed, where not, what company data it can use, and who needs to review high-impact decisions."
Also: Why complex reasoning models could make misbehaving AI easier to catch
"It's important that people believe AI systems are fair, transparent, and accountable," said Justin Salamon, partner with Radiant Product Development. "Trust begins with clarity: being open about how AI is used, where data comes from, and how decisions are made. It grows when leaders implement balanced human-in-the-loop decision making, ethical design, and rigorous testing for bias and accuracy."
Such trust stems from being explicit with employees about their company's intentions with AI. Be clear about ownership, Krach advised. "Every AI feature should have someone accountable for potential failure or success. Test and iterate, and once you feel confident, publish a plain-English AI charter so employees and customers know how AI is used and trust you on this matter."
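As a rough illustration of what such plain-English rules might look like once written down, here is a hypothetical sketch. The areas, data categories, and owners below are invented examples, not JobLeads' actual policy.

```python
# Illustrative sketch only -- the rule fields and entries are hypothetical examples.
AI_USAGE_RULES = [
    {
        "area": "customer support drafts",
        "allowed": True,
        "permitted_data": ["public docs", "anonymized tickets"],
        "review_required_for": "customer-facing replies",
        "accountable_owner": "Head of Support",
    },
    {
        "area": "hiring decisions",
        "allowed": False,  # high-impact decisions stay with humans
        "permitted_data": [],
        "review_required_for": "any use",
        "accountable_owner": "Head of HR",
    },
]

def owner_for(area: str) -> str:
    """Look up who is accountable for an AI feature's success or failure."""
    for rule in AI_USAGE_RULES:
        if rule["area"] == area:
            return rule["accountable_owner"]
    raise KeyError(f"No AI rule defined for: {area}")

print(owner_for("hiring decisions"))  # "Head of HR"
```

Whatever form the charter takes, the design choice is the same: every use of AI maps to an explicit rule and a named owner, so nothing operates in a gray zone.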
The key tenets of responsible AI
What are the markers of a responsible AI approach that should be on the radar of executives and professionals in the year ahead?
Also: Want real AI ROI for business? It might finally happen in 2026 - here's why
The eight key tenets of responsible AI were recently posted by Dr. Khulood Almani, founder and CEO of HKB Tech:
- Anti-bias: Eliminate discrimination.
- Transparency and explainability: Make AI decisions clear, traceable, and understandable.
- Robustness and safety: Avoid harm, failure, and unintended actions.
- Accountability: Assign clear responsibility for AI decisions and behaviors.
- Privacy and data protection: Secure personal data.
- Societal impact: Consider long-term effects on communities and economies.
- Human-centric design: Prioritize human values in every interaction.
- Collaboration and multi-stakeholder engagement: Involve regulators, developers, and the public.