Last week, I had the privilege of speaking on a panel exploring how businesses must evolve their thinking in an era of AI dominance.
I wanted to share the panel questions as well as my answers.
Thank you to our excellent moderator, Annabel Gillard, and to my fellow panellists and teammates, Alex Farrell and Sarah Zheng.
And thank you to Dinis Guarda and Pallavi Singal for inviting me to the Businessabc AI Global Summit.
Here are the questions and my answers:
1. What does responsible business mean to you and what’s the relevance for AI companies?
Responsible business means acting in ways that create long-term value for all stakeholders, not just shareholders.
For AI companies, that means building with transparency, fairness, and inclusivity baked into their models.
AI doesn’t just mirror our intent; it amplifies it.
So responsibility here means being intentional about ethical data sourcing, model bias, explainability, and human impact from day one.
2. Can business prioritise both responsibility and growth? How do you find the right balance?
Absolutely. Responsibility isn’t a brake on growth; it’s a growth multiplier.
Businesses that integrate ethical AI practices and data privacy protections from the start gain trust faster, differentiate better, and avoid reputational risk.
The balance comes from treating responsibility as a strategic pillar, not just a compliance checkbox.
It requires cross-functional ownership rather than being left to legal or IT alone.
3. Are there any advantages to being an AI-first company when it comes to business responsibility?
Yes, being AI-first offers a greenfield opportunity to embed responsible practices into the foundation, from data governance and bias mitigation to automated compliance checks.
Traditional companies often retrofit responsibility.
AI-first firms can design it in, building more resilient, human-aligned systems.
They can also lead the way in setting industry norms and standards.
4. What issues do you see with regard to business resilience — new threats and new protections?
We’re facing a new kind of fragility.
AI-enabled phishing, deepfakes, and synthetic identity fraud are emerging fast.
But the biggest threat? Overconfidence.
Businesses think cybersecurity is a tech problem, but it’s a people and process problem too.
The future of resilience lies in adaptive identity protection, continuous model monitoring, and upskilling staff on AI literacy.
Human error is still the weakest link, and education is still our best defense.
5. How does AI affect working people’s potential, especially regarding issues like forced or child labour?
AI can liberate human potential: automating drudgery, augmenting judgment, and giving people space for more creative, empathetic work.
But there’s also a darker edge, where opaque supply chains and AI-enabled productivity metrics could mask exploitative labor or automate away dignity.
Responsible businesses use AI to shine a light on unethical practices, not to hide them.
Transparency in supply chains and a commitment to ethical auditing are essential.
6. What’s your view on AI ethics and its role in addressing these issues or improving governance? Any examples?
AI ethics can’t just live in a slide deck.
It must be practical, measurable, and enforced.
Governance means asking: Who is accountable when AI fails? Who gets impacted?
One positive example is Microsoft’s Responsible AI Standard, which integrates engineering processes with ethical risk reviews.
A poor example? Meta’s repeated rollout of models that lacked adequate bias and harm mitigation, showing what happens when speed trumps scrutiny.
7. Is cybersecurity a responsible business issue? How should business think about it?
Yes, cybersecurity is now a core part of ESG.
If you're not protecting your customers’ data, you're not a responsible business.
It’s not just IT’s problem; it’s a leadership responsibility.
Think of cybersecurity as digital trust infrastructure, just like physical safety standards in manufacturing.
Boards need to ask hard questions, invest in scenario planning, and embed a security mindset in culture, not just technology.
8. One suggestion to help businesses boost their cybersecurity or responsibility?
Start with storytelling.
If people don’t understand why responsibility and cybersecurity matter, they won’t care.
Train your teams not just in tools, but in narratives: real-world examples of breaches, harms, or ethical failures.
Bring humanity into your cybersecurity strategy.
Technology protects systems, but stories protect cultures.
