Are you for or against regulating AI evolution?

Newsletter First Edition 

We recently asked our community of AI enthusiasts the question: “Are you for or against regulating AI evolution?” The results were overwhelmingly in favour of governance, but they also hint at something deeper.

Survey Highlights:

  • Over 200 participants from around the globe answered the survey.
  • More than 90% of respondents are actively working in AI development.
  • 81% voted in favour of AI governance.

While these results may not be surprising, it is notable that such a strong preference for governance comes from professionals actively engaged in AI development. This raises a bigger question: what drives those creating AI to seek governance over it?

I know there are different ways to interpret the results. Some might think that if the people who are developing AI vote for governance, it’s because they’re afraid of the risks.

While governance is an important part of the conversation, I want to look at this from a different perspective—to ask a new question that the call for governance may be hiding, since governance is what we’re trained to focus on.

Let’s think about what this strong support for governance might really mean and whether it’s pointing to something deeper that governance alone can’t solve.

A Call for Clarity

At this point, it’s clear that we lack clarity on AI—how it will integrate into our lives and what its broader impact will be. The call for governance, whether through governments, legal frameworks, or a higher regulatory body, fundamentally represents a demand for a clearer and more cohesive understanding of AI’s use and integration.

If AI professionals are advocating for governance, it highlights their recognition of AI’s rapid implementation across more and more areas, and their awareness of the limitations companies face in addressing the broader implications of their AI use. While companies focus on specific tasks and goals for their AI systems, who ensures that these individual efforts align within the larger context—particularly for the people who will use AI as part of their everyday lives?

A Broader Perspective

Consider this: a study by McKinsey Global Institute estimates that by 2030, AI could contribute to creating 20 million to 50 million new jobs globally.

This serves as a testament to the extent of investment organisations are making—and will continue to make—in integrating AI into every aspect of our lives.

The overwhelming call for governance reflects our intuition—our collective understanding that something transformative is at play. What we lack is an orchestration process to ensure AI fits cohesively into our ecosystem, with a focus on how we use it in our daily lives.

Whether this comes through governance or other means, such as education systems stepping up to provide a better understanding of AI, this isn’t just about technology or information. And it’s not solely about the fear we often focus on when calling for governance. It’s about merging AI into our lives in a way that fosters harmony and accountability.

A Call to Reflect on Use, Not Development

The real focus should be on how we use AI, not on how we develop it. 

This distinction represents the balance between ensuring AI’s potential isn’t hindered by fear and empowering humanity to use it responsibly.

Empowering individuals with the right knowledge and creating thoughtful concepts around AI’s role in our lives is key.

This isn’t about treating AI as another piece of technology; it’s about recognising its potential to develop a form of autonomy. Regulation alone isn’t enough. We, as users, must take accountability for how we integrate AI into our daily lives.

Education, understanding, and clarity are what we’re truly seeking when we ask for governance. It’s the guidance to use AI in ways that support us while aligning with our values and needs.

My Thoughts:

I hope this has encouraged you to consider a new perspective. While the survey showed overwhelming support for governance and regulation, let’s take a moment to reflect on this: no matter how many regulations are put in place or who enforces them, they won’t replace the accountability we, as users, must take.

As humans interacting with AI daily, it’s essential that we are well-educated and develop the right concepts to ensure AI is used in a way that aligns with our values, meets our needs, and most importantly, preserves our individuality and humanity—both personally and collectively.

This is a call to action for all of us. Rules and governance alone won’t erase the fears or uncertainties we might feel about AI. Instead, it’s through understanding, accountability, and conscious use that we can shape AI to truly serve us in meaningful ways.

Subscribe to my newsletter for a new perspective powered by diverse human thought.

Written by

Sarit Lahav

I’m Sarit Lahav, a Strategy and Transformation consultant with a focus on developing impactful AI strategies that merge business insight and technological expertise. Leveraging my extensive experience as a co-founder and former CEO of a global high-tech firm, where I served over 5000 clients and spearheaded innovative technology solutions, I advocate for treating AI as a true team member. My goal is to harness AI to deliver tangible business results, emphasizing its role in augmenting rather than substituting the human touch. Let’s connect to redefine the synergy between AI and human collaboration for your business.