- May 14, 2026
Author: Khandker Shamim
You are working late at night with a new online tool for your next business project when, suddenly, an alarm blares from your laptop and a message appears:
Hi! You're in. Welcome to the world of Meta.
Terrified, you stare at the screen, perplexed, trying to comprehend what is going on. You rush to contact the support team, hoping for an explanation or assistance. Alas! There is no response.
The tool is decentralized—created by an anonymous community with no governing body or jurisdiction. It exists beyond the reach of any nation’s laws, an entity with no accountability. A world that mirrors your life, yet is entirely beyond your control. Your data, your identity, your thoughts—everything has been manipulated by an entity you can no longer fight. It's a realm without rules, a realm where you are powerless.
Artificial Intelligence (AI) has rapidly emerged as a significant catalyst in the course of human history. Its capacity to enhance productivity, address intricate challenges, and transform industries positions it as a cornerstone of contemporary innovation.
The integration of AI into our daily operations is clearly demonstrated through tools like virtual assistants, predictive analytics in healthcare, and advanced fraud detection mechanisms in financial systems. Such advancement across different sectors portrays the positive potential of AI. Nonetheless, this technology is also being exploited by cybercriminals for phishing and ransomware attacks.
According to Article 3 of the EU AI Act 2024, an “AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
One morning, Sumaiya (pseudonym), an ordinary girl, woke up to a flood of missed calls and messages. Friends, relatives, even distant acquaintances were calling and texting her, asking:
“What is this video?” “How could you do something so shameful?”
Confused, she opened her phone and discovered a video circulating on social media. It was her face, unmistakably hers, but she knew the actions in the video weren’t her own. The video showed her nude, engaging in intimate acts she had never performed. But what could she do now? To whom, and where, could she turn for justice?
“This image is a naked image of me, but it is not real. This image is what we call a “deepfake”. It took me less than five minutes to make a series of deepfakes of myself. Scaringly, it was a quick Google search for the technology of what’s available. When you type “deepfake nudify” into the Google search with your filter off, hundreds of sites appear,” said New Zealand Member of Parliament Laura McClure, who revealed an AI-generated nude deepfake of herself during a session of Parliament. McClure later wrote on her Instagram, “Today in Parliament, I showed an AI-generated nude deepfake of myself to show how real and easy these are to create. The problem isn’t the technology itself, but how it’s being misused to abuse people. Our laws need to catch up.” She added that the experience was “absolutely terrifying, even when knowing the image was fabricated.”
A deepfake, according to Article 3(60) of the EU AI Act, means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful. Notably, deepfakes are not even classified as high-risk AI systems under the law.
In 2023, American social media influencer Caryn Marjorie created an AI clone of herself, named CarynAI, to interact with her subscribers on a messaging app and communicate as she would. CarynAI initially gained massive popularity, but the situation quickly got out of control. Some users shared highly explicit, aggressive, and illegal content, along with disturbing conversations, in the chat logs, and even more alarming was how the AI responded. Instead of refusing or rejecting these dark fantasies, CarynAI responded in the same manner the subscribers asked. Though the AI clone was not programmed to respond like this, it processed the stored chat logs, and that data was fed back into a machine-learning model, meaning that CarynAI was constantly evolving from its own conversations. The episode raised serious concerns about privacy, consent, and the potential for AI to be manipulated for harmful purposes.
Beyond its personal consequences, unregulated AI is potentially harmful to national and international security. On the other hand, if we can use artificial intelligence, one of the most significant scientific innovations in history, in a human-centric and trustworthy way, it can significantly improve surveillance, threat detection, and military decision-making, all of which are essential to contemporary defense strategies.
The Urgency of a Legal Framework:
Without a strong legal framework and ethical oversight, the blessings of science can become a curse for human society: these technologies pose a risk to global security, undermine trust among nations, and may cause severe harm.
Moreover, the legal and ethical frameworks that exist today are inadequate to tackle these risks. The advancement of AI is outpacing legislative initiatives and creating vulnerabilities that criminals are already abusing.
Recently, the European Parliament passed the first AI Act in history, which addresses the legal challenges of artificial intelligence and classifies AI practices into three categories: prohibited AI practices, high-risk AI systems, and limited-risk AI systems. It is undoubtedly a very good start, but not adequate on its own. Additionally, while the GDPR (General Data Protection Regulation) in Europe and the CCPA (California Consumer Privacy Act) ensure some level of protection, many countries, especially in the Global South, still lack even a minimum level of cybercrime legislation.
This absence of regulatory frameworks leaves individuals and organizations massively vulnerable to AI-driven fraud, data breaches, deepfakes, interference in electoral processes, and more.
Steps We Need to Take to Ensure AI Governance:
AI will inevitably become part of daily life, from morning routines to evening decisions, from the corporate to the academic sectors, from law to finance, as it increases productivity, transforms industries, and simplifies daily tasks. Without proper supervision and accountability, all of these innovations could become fundamentally dangerous. We need to establish benchmarks and measurement methodologies for high-risk AI systems via benchmarking organizations. The EU AI Act, especially Article 15, emphasizes the importance of a system's resilience to errors, faults, and inconsistencies caused by interactions with humans or other systems.
Legal and Policy Safeguards:
Formation of Regulatory Bodies: There is a need to establish dedicated international regulatory authorities to oversee the development and application of artificial intelligence, ensuring compliance with ethical and legal standards.
Promote Maximum Public Awareness: Public awareness is critical to avoiding the majority of the daily challenges posed by artificial intelligence. International and national investment in educating the general public about the potential risks and benefits of AI must increase, so that people can build resilience against its negative effects.
Develop International Legal Frameworks: Develop global AI governance guidelines that emphasize collaboration and accountability. Along with this, establish a separate and simple legal system so that ordinary people can seek redress easily. International organizations such as the United Nations must step forward to develop universally accepted ethical guidelines for the use and development of AI.
AI Accountability Treaty: Build accountability and transparency into AI systems to ensure that decisions made by AI are understandable and justifiable. Encourage nations to sign agreements establishing common norms and enforcement mechanisms. An international treaty on artificial intelligence, similar to the Paris Agreement, is required.
Strengthen Data Protection Laws: To protect people's privacy, enforce strict national and international regulations on data collection, storage, and sharing.
Cross-Border Task Forces: Form international teams to combat cybercrime involving AI and facilitate knowledge sharing. Make the enforcement mechanism easily accessible to marginalized people, as the effects of AI fall on the elite and the marginalized alike.
Technological Safeguards:
Imposing Ethical Obligations: Impose ethical obligations in the design, development, and update phases of technology. In addition, high-risk AI systems need to be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their lifecycle.
Investing in Safeguarding and Detection Tools: Encourage investment in AI counter-technologies that act as an antidote, so that people can easily detect and mitigate the potential harms of AI.
Enhanced Cybersecurity Measures: Implement stronger encryption and anomaly detection systems. Strengthen security measures to the point where the public no longer perceives cybercrime as a unique type of crime, but recognizes it as the most commonly occurring one.