Artificial Intelligence (AI) is rapidly transforming many aspects of our society, but its impact is not always positive. One of the areas where AI is finding controversial applications is organized crime. A recent study from the EL PACCTO 2.0 program highlighted how the adoption of AI technologies by criminal groups is already a reality, especially in Latin America, the Caribbean, and the European Union. This article explores how AI is exploited by organized crime, the main crimes committed using AI tools, and how justice and security institutions are attempting to respond to these challenges.
Organized Crime and Artificial Intelligence: A Dangerous Alliance
The alliance between organized crime and artificial intelligence has strengthened illegal activities such as human trafficking, ransomware, banking fraud, and sexual exploitation. A particularly alarming aspect is the ease with which these technologies can be used even by individuals with limited technical expertise. Automation allows criminal groups to expand the scope of their operations while reducing the likelihood of detection.
One illustrative example is the use of AI-controlled bots to launch large-scale attacks such as Distributed Denial of Service (DDoS) or to distribute malware. Recent reports indicate that these bots are increasingly sophisticated and capable of identifying and exploiting vulnerabilities in real time, enhancing efficiency and reducing the time required to carry out an attack. According to the EL PACCTO program, between 2021 and 2023 there was a 284% increase in identity theft cases in South Africa, facilitated by the use of AI to create fake identities and commit financial fraud. These numbers demonstrate how AI not only increases the effectiveness of criminal actions but also makes it extremely difficult for authorities to respond.
In Latin America, criminal groups have begun using AI-controlled drones to transport drugs and carry out physical attacks against rival groups or law enforcement. For instance, in 2018, a drone attack targeted the Secretary of Public Security in Baja California, Mexico. Equipped with explosive devices, these drones represent a new threat that combines the autonomy and precision of AI technologies with the lethality of weapons. Furthermore, these devices have been used to monitor and surveil law enforcement activities, further complicating authorities' operations.
AI is also used to identify potential victims of human trafficking through digital platforms such as social media and dating sites. This enables criminals to precisely target vulnerable individuals in economically or socially difficult situations. A recent EL PACCTO report noted that online recruitment and AI-assisted manipulation of victims are now standard practice in human trafficking networks in Europe and Latin America. Machine learning techniques are employed to analyze victim profiles and optimize persuasion tactics, increasing the success rate of recruitment operations.
Another worrying aspect is the use of deepfake technologies. Beyond creating fake videos and audio to impersonate others, these technologies are exploited for sophisticated financial fraud. In 2024, a scam in England used deepfakes to impersonate a company's chief financial officer, convincing an employee to transfer over $25 million. In many cases, deepfakes are used to forge identities and commit banking fraud or other fraudulent activities, making it increasingly difficult for victims to distinguish between reality and fiction.
The Response of Justice and Security Institutions
Justice and security institutions are trying to respond to the challenges posed by the criminal use of AI by adopting AI technologies themselves for crime prevention and response. Among the most common applications are:
• Predictive analysis and surveillance: AI is used to analyze large amounts of data and predict where crimes might occur, enabling law enforcement to optimize resource allocation. A significant example is the use of predictive algorithms to identify high-risk areas and improve patrolling. In Colombia, a project developed in collaboration with UNESCO improved police resource distribution based on historical and real-time crime data. However, the use of predictive technologies raises ethical issues and concerns related to human rights, such as discrimination and excessive surveillance of vulnerable communities. A minimal illustrative sketch of this kind of grid-based risk scoring appears after this list.
• Judicial case management: In Europe, several countries have adopted AI-based systems to manage judicial cases more efficiently, automating repetitive tasks and facilitating access to legal information. In Germany, the OLGA system (Online Criminal Proceedings Register for Organized Crime and Money Laundering) centralizes and manages organized crime case data, improving information sharing and response times. Similarly, in Latin America, some jurisdictions are attempting to digitize and modernize their judicial infrastructures through AI integration, though implementation is hindered by limited resources and fragile infrastructures.
• Facial recognition: AI-based facial recognition technologies are used to identify suspects or victims, though there are significant ethical issues related to privacy and the possibility of errors. In Latin America, facial recognition systems have been adopted in countries like Brazil and Mexico, where they are used to monitor public spaces and ensure security. However, there have been numerous cases of misuse, such as unauthorized surveillance of activists and journalists, raising concerns about human rights protection.
• International cooperation and blockchain: Another important element in the response to organized crime is international cooperation. Projects like INSPECTr, funded by the European Union, use blockchain technology to ensure the integrity of collected evidence and data traceability across jurisdictions. This technology helps reduce the risk of evidence tampering and facilitates secure information sharing among authorities. A toy hash-chain example after this list illustrates the underlying principle.
• Automatic translation and interpretation: In the context of international cooperation, language barriers represent a significant challenge. Projects like MARCELL, within the Connecting Europe Facility (CEF) program, aim to improve the quality of automatic translations in legal contexts, enabling smoother communication during transnational proceedings. This is particularly relevant in cases where cooperation between different countries is essential to combat organized crime. A short, hedged translation sketch also follows this list.
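To make the predictive-analysis item above concrete, here is a minimal sketch of grid-based risk scoring. Everything in it is invented: the incident counts are synthetic, and real systems (such as the Colombian project mentioned above) involve far richer data, validation, and the ethical safeguards this article discusses.

```python
# Minimal illustrative sketch of grid-based crime "hotspot" scoring.
# All data here is synthetic; real predictive-policing systems are far
# more complex and raise the ethical concerns discussed in this article.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic history: for each (cell, week) we record incident counts.
n_cells, n_weeks = 100, 52
counts = rng.poisson(lam=rng.uniform(0.5, 5.0, size=n_cells)[:, None],
                     size=(n_cells, n_weeks))

# Features per cell: long-run average and recent average of counts.
X = np.column_stack([
    counts[:, :-1].mean(axis=1),    # long-run average
    counts[:, -5:-1].mean(axis=1),  # recent average
])
y = counts[:, -1]                   # next-week count to predict

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Rank cells by predicted risk to prioritize resource allocation.
risk = model.predict(X)
top_cells = np.argsort(risk)[::-1][:10]
print("Highest-risk grid cells:", top_cells)
```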
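The evidence-integrity idea behind blockchain-based platforms like INSPECTr can likewise be illustrated with a toy hash chain. This is emphatically not the INSPECTr implementation, just a self-contained sketch of why chaining cryptographic hashes makes tampering with any earlier record detectable.

```python
# Toy hash chain illustrating evidence integrity. NOT the INSPECTr
# system: a self-contained sketch of the underlying principle only.
import hashlib
import json
import time

def make_block(evidence: dict, prev_hash: str) -> dict:
    """Bundle an evidence record with the hash of the previous block."""
    block = {"evidence": evidence, "prev_hash": prev_hash,
             "timestamp": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Re-hash every block and check the links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"item": "disk image A", "officer": "unit-1"}, "GENESIS")]
chain.append(make_block({"item": "phone dump B", "officer": "unit-2"},
                        chain[-1]["hash"]))
print(verify_chain(chain))   # True: chain is intact
chain[0]["evidence"]["officer"] = "tampered"
print(verify_chain(chain))   # False: the edit is detected
```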
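Finally, for automatic translation, the hedged sketch below uses a publicly available open-source model (Helsinki-NLP's opus-mt, chosen here purely as a stand-in; MARCELL and the CEF eTranslation service are separate, domain-tuned systems) to show the basic machine-translation step that would precede mandatory human review in a legal workflow.

```python
# Hedged sketch of machine translation in a legal-cooperation workflow.
# Uses a public model as a stand-in; this is not MARCELL or eTranslation.
# Requires: pip install transformers sentencepiece
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")

request = "Se solicita la extradición del acusado conforme al tratado vigente."
result = translator(request)
print(result[0]["translation_text"])
# In formal proceedings, a human legal translator should always review
# machine output before it is relied upon.
```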
Despite these efforts, many challenges remain in using AI for justice and security activities. One major issue concerns protecting human rights and managing the algorithmic biases that could lead to discrimination. The need for rigorous regulations to ensure the ethical and responsible use of these technologies is more evident than ever, especially in the introduction of tools like facial recognition and predictive analysis, which can have a direct impact on citizens' lives.
Recommendations and Guidelines from International Organizations
In recent years, there has been growing interest in the ethical, responsible, and fair development of artificial intelligence technologies. Many international institutions have worked diligently to define guidelines, principles, and recommendations in this field. A significant reference at the European level is the "Ethics Guidelines for Trustworthy AI," published in 2019 by the High-Level Expert Group on Artificial Intelligence (AI HLEG), a group of experts established by the European Commission to outline a shared vision ensuring respect for human rights, democratic values, and deeply rooted ethical principles.
The guidelines highlighted the importance of technical robustness capable of minimizing risks and vulnerabilities, the centrality of safety in technological development, and transparency, essential for enabling users and regulators to understand the logic underlying automated decision-making processes. Additionally, they emphasized the responsibility of all actors involved in the AI lifecycle and the need to prevent any form of discrimination to avoid technology becoming a tool for injustice or inequality.
Attention to this issue has extended well beyond the European Union, reaching the United Nations and its decision-making bodies. In 2024, the UN General Assembly adopted resolution A/78/L.49, emphasizing the centrality of human rights and urging member states not to use AI in ways contrary to international law, committing them to prevent scenarios where these technologies could be exploited to harm fundamental freedoms or target already vulnerable groups. This document lays the foundation for international cooperation aimed at protecting rights at every stage of the design, development, distribution, and use of automated systems, inspiring national policies that never overlook the ethical dimension of technology.
Another tool provided to the international community is the Toolkit for Responsible Artificial Intelligence for law enforcement, developed by the United Nations Interregional Crime and Justice Research Institute (UNICRI) and INTERPOL, with support from the European Union. This Toolkit offers practical guidance for security agencies in understanding and employing AI systems, providing recommendations on how to use technology consistently with human rights, ethical principles, and fundamental values underpinning contemporary societies. Through this initiative, police forces and judicial institutions in various countries can acquire theoretical and operational knowledge, adopt transparent procedures, assess the impact of technologies on the ground, and enhance crime prevention and control activities without compromising principles of accountability and legitimacy.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has played a crucial role in this global effort by creating a Global Observatory on AI Ethics and Governance, designed as a permanent monitoring tool for ethical practices adopted in different national settings. In addition to providing guidelines for evaluating the impact of AI technologies, UNESCO has promoted the dissemination of best practices and established structures like the AI Ethics and Governance Lab, intended to gather knowledge, experiences, studies, and research to foster the continuous evolution of regulations and supervisory mechanisms. As early as 2021, UNESCO published the "Recommendation on the Ethics of Artificial Intelligence," a landmark document summarizing values and principles useful for guiding governments and policymakers in creating regulatory frameworks that combine technological innovation with respect for human dignity, cultural diversity, and environmental protection.
Another prominent body is the Organization for Economic Co-operation and Development (OECD), which outlined a specific recommendation on AI, promoting the vision of technology usage aligned with democratic principles, human rights protection, and the need to maintain human oversight where automated systems may exhibit undesired behavior. The OECD updated its guidelines in 2024, broadening the definition of "AI system" and addressing safety issues more specifically, ensuring that human intervention remains possible and secure even in emergency scenarios, thus preventing potential abuses or malfunctions.
Overall, the framework emerging from the work of international organizations, specialized agencies, and globally funded projects is one of a constantly evolving ecosystem aimed at creating shared standards for the governance, regulation, and ethics of artificial intelligence. This ecosystem, while recognizing the innovative potential of these technologies, seeks to maintain human oversight, bind them to specific ethical principles, and place them within a sustainable, inclusive perspective attentive to social, environmental, and economic dimensions. It is an ongoing process, a collective effort involving governments, supranational institutions, experts, scientific communities, civil society, and the private sector, aiming to ensure that artificial intelligence becomes a resource benefiting humanity, consistently oriented towards respect for rights, freedoms, safety, and the common good.
International Legal Cooperation
In recent years, international legal cooperation has gained increasing importance in combating transnational crime, prompting the European Union and its partners to finance and promote innovative projects to support security agencies, law enforcement, and judicial institutions. Projects like EXCAPE have inaugurated a new generation of tools based on artificial intelligence and advanced data analysis, aimed at identifying and predicting organized crime dynamics on a global scale. By integrating diverse information sources, ranging from social networks to financial and criminal records, EXCAPE seeks to provide a comprehensive view of illicit activities, fostering more effective coordination among various jurisdictions and accelerating interventions against particularly complex and articulated phenomena.
In parallel, the ROXANNE project has demonstrated how voice recognition, combined with advanced social network analysis techniques, can enhance law enforcement's ability to identify and dismantle organized criminal groups. By processing intercepted communications, ROXANNE has shown how it is possible to connect suspected individuals, revealing links, hierarchies, and internal dynamics often very difficult to uncover with traditional methods. In this way, it has been possible to provide more solid evidence to the competent authorities, improving the quality and effectiveness of investigations.
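The social-network-analysis side of this approach can be sketched in a few lines: given speaker-attributed communication records (invented here), one builds a graph and computes centrality measures to surface likely intermediaries. This is a drastic simplification of what ROXANNE does, offered only to make the underlying idea concrete.

```python
# Minimal sketch of the network-analysis step: once speakers have been
# identified (e.g., by voice recognition), communications become a graph.
# The call records below are invented for illustration.
import networkx as nx

calls = [  # (caller, callee) pairs from hypothetical intercepts
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
    ("D", "E"), ("D", "F"), ("E", "F"), ("C", "G"),
]

G = nx.Graph()
G.add_edges_from(calls)

# Betweenness centrality highlights brokers who connect subgroups;
# in investigations such nodes often correspond to intermediaries.
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{node}: {score:.2f}")
```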
Another piece of this mosaic is the TRACE project, which specifically aims to combat human trafficking using machine learning algorithms capable of detecting online recruitment patterns and mapping routes used by traffickers. This approach makes it possible to systematically analyze communications and movements, revealing networks of accomplices and geographical trafficking routes that, once identified, can be disrupted and dismantled. Understanding these dynamics has given investigators a practical tool for intervening promptly and effectively, protecting vulnerable individuals and discouraging the proliferation of such crimes.
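A hedged sketch of this kind of pattern detection: the snippet below trains a tiny text classifier on invented example posts to flag potentially suspicious recruitment language. It is not the TRACE system; any real deployment would require carefully curated data, expert review, and strong safeguards against false positives.

```python
# Hedged sketch of flagging suspicious online recruitment posts.
# The tiny labeled dataset is invented; a real system needs curated
# data, expert oversight, and careful handling of false positives.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "High pay abroad, no experience needed, we arrange travel and papers",
    "Urgent: models wanted overseas, passport held by agency on arrival",
    "Part-time barista wanted, weekend shifts, apply at the counter",
    "Junior developer role, hybrid work, standard benefits package",
]
labels = [1, 1, 0, 0]  # 1 = potentially suspicious, 0 = benign

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

new_post = "Work abroad immediately, flights paid, documents handled for you"
print(clf.predict_proba([new_post])[0][1])  # probability of being suspicious
```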
Equally relevant has been the experience gained with the INSPECTr project. Here, the focus has been on creating a common technological infrastructure, a sort of shared intelligence platform that, leveraging big data analysis, machine learning, and blockchain technology, has facilitated the circulation of digital and forensic information across different jurisdictions. INSPECTr has enabled faster and smarter management of investigations into complex crimes, reducing costs and overcoming obstacles related to fragmented databases or heterogeneous procedures. This greater fluidity in information exchange and operational coordination has strengthened cross-border collaboration, raising the level of trust among the parties involved and significantly improving the effectiveness of the international response to rapidly evolving criminal threats.
A different but equally crucial approach to legal cooperation has been implemented by the AVIDICUS project, which investigated the use of video-mediated interpretation in international criminal proceedings. Conducted in three phases, the project examined the impact of videoconferencing and remote interpretation on the quality of communication between judges, lawyers, defendants, witnesses, and interpreters. From improving mutual understanding to providing specific training for the professionals involved, AVIDICUS has shown how communication technologies can contribute to a more efficient and inclusive judicial system, reducing the linguistic and cultural barriers that previously slowed or compromised the fairness of trials. Harmonized procedures and easier access to interpretation services have also facilitated information sharing, enhancing cooperation between states and promoting the effective administration of justice.
Overall, the combination of these projects highlights an integrated vision of international legal cooperation, where cutting-edge technologies, data analysis, mediated interpretation, artificial intelligence, and blockchain come together to make crime control more timely, coordinated, and incisive. The common goal is to establish a collaborative ecosystem where regulatory, linguistic, or organizational differences no longer constitute insurmountable obstacles but rather challenges to be overcome through knowledge sharing, technological innovation, and building mutual trust. These initiatives demonstrate the international community's commitment to ensuring a future where legal cooperation can constantly adapt to the changing nature of crime, protecting citizens, fundamental values, and the stability of societies in the long term.
Regional and National Regulatory Initiatives
In recent years, at regional and national levels, there has been an intensification of efforts to integrate international ethical recommendations and principles into strategies for adopting artificial intelligence. In particular, Latin America and the Caribbean are progressively incorporating these guidelines, with numerous countries developing their own national strategies to promote the responsible use of AI technologies in various economic and social sectors. Argentina, Brazil, Chile, Colombia, Mexico, Peru, and Uruguay, for example, have already defined or are defining policy guidelines reflecting the awareness of the need to balance innovation opportunities with safeguarding human rights, protecting democracy, ensuring process transparency, and equitable access to the benefits of AI. Costa Rica, through its Ministry of Science, Innovation, Technology, and Telecommunications (MICITT), has made public the 2024-2027 National AI Strategy, indicating that alongside technological progress, maintaining fundamental values such as inclusion, responsibility, and sustainability is essential.
These initiatives are not limited to mechanically replicating recommendations from entities such as the OECD and UNESCO but tend to root them in local dynamics and priorities. The search for a balance between economic needs, infrastructure availability, trained human capital, and attention to digital rights has led, for instance, to the creation of the Latin American Artificial Intelligence Index (ILIA), developed by the Economic Commission for Latin America and the Caribbean (ECLAC) in collaboration with Chile's National Center for Artificial Intelligence (CENIA) and supported by the Inter-American Development Bank (IDB). This assessment tool not only highlights the capacity of individual countries to adopt and integrate AI technologies but also provides a comparative framework on the quality of the policies and resources introduced. The index results indicate that Chile, Brazil, and Uruguay stand out in terms of investments in research, training, and innovation, positioning themselves as regional models in creating more mature and aware technological ecosystems.
Despite progress, there are still areas where regulations and strategies appear less developed, particularly in the justice sector. Employing AI systems in the judicial context could offer significant advantages in terms of efficiency and workflow management but raises critical issues related to protecting individual rights, fairness in automated decisions, algorithm transparency, and the accountability of involved actors. So far, only a limited number of countries have begun developing dedicated AI strategies in the justice sector, and the guidelines developed by CEPEJ and UNESCO, as well as the experiences of other countries, serve as essential reference points to avoid automation introducing new forms of discrimination or undermining trust in legal systems.
The regulation of AI has become particularly prominent in the European Union, where the Artificial Intelligence Act (AI Act), approved in 2024, establishes a regulatory framework identifying different risk levels associated with AI technology use, imposing stringent obligations in critical sectors and banning certain applications deemed incompatible with the EU's fundamental values. This approach is based on preventing potential abuses, such as prohibiting certain remote biometric identification systems in public spaces, and on strengthening provider accountability, requiring providers to meet specific requirements before bringing their systems to market.
Alongside the AI Act, the Council of Europe's Framework Convention on AI, opened for signature in 2024, adds another layer to the protection of human rights, democracy, and the rule of law in the digital age. This binding international treaty emphasizes the importance of adopting legislative, policy, and supervisory measures to ensure that the use of AI aligns with the highest standards of transparency, safety, and accountability, calling on states to cooperate continuously in harmonizing national policies.
Overall, the regional and national landscape reveals a complexity where countries with varying development levels, political priorities, and financial resources are trying to build a regulatory framework consistent with international principles and ethical recommendations. On the one hand, integrated strategies connecting key economic sectors, training pathways, technological infrastructures, and governance choices are maturing; on the other hand, there remains a pressing need to strengthen regulation in specific areas such as justice and the protection of vulnerable individuals, as well as to consolidate cooperation among states and international organizations. The outcome of these intersecting efforts should be greater confidence in humanity's ability to govern and leverage artificial intelligence so that technology, rather than undermining rights and values, transparently and responsibly contributes to economic growth, social progress, and sustainable development.
Ethical Challenges and Human Rights
The use of AI-based tools in justice and security activities raises numerous ethical questions. Among the main risks are algorithmic biases, which can lead to significant discrimination. For example, facial recognition systems often make errors in identifying individuals belonging to ethnic minorities, jeopardizing the impartiality of investigations. A study conducted by the AI Now Institute found that error rates for women of color are significantly higher than for white men, increasing the danger of injustice and discrimination. To address these issues, the Council of Europe has issued Recommendation CM/Rec(2020)1, prescribing periodic assessments of the impact that AI systems used in criminal investigations have on human rights and privacy.
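One concrete form such an impact assessment can take is a disaggregated error-rate audit. The sketch below uses entirely synthetic identification results to show the kind of per-group measurement that exposes the disparities described above.

```python
# Minimal sketch of a disaggregated error-rate audit, the kind of check
# that impact assessments such as CM/Rec(2020)1 call for. The match
# results and group labels below are entirely synthetic.
import numpy as np

rng = np.random.default_rng(7)
groups = np.array(["group_a"] * 500 + ["group_b"] * 500)
y_true = rng.integers(0, 2, size=1000)  # ground-truth identity match
# Simulate a system that errs more often on group_b.
noise = np.where(groups == "group_a", 0.05, 0.20)
flip = rng.random(1000) < noise
y_pred = np.where(flip, 1 - y_true, y_true)

for g in ("group_a", "group_b"):
    mask = groups == g
    fpr = ((y_pred == 1) & (y_true == 0) & mask).sum() / ((y_true == 0) & mask).sum()
    fnr = ((y_pred == 0) & (y_true == 1) & mask).sum() / ((y_true == 1) & mask).sum()
    print(f"{g}: false-positive rate {fpr:.2%}, false-negative rate {fnr:.2%}")
```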
Moreover, using AI for predictive surveillance raises concerns about citizens' privacy and potential violations of fundamental human rights. Predictive surveillance tools could lead to mass profiling of individuals, where people from certain social or demographic categories are identified as potential threats without concrete evidence. Colombia's Constitutional Court recently issued an opinion on using generative AI tools, such as ChatGPT, in judicial decisions, warning about risks to citizens' fundamental rights and emphasizing the importance of ensuring the final decision remains in the hands of human judges.
Additionally, adopting AI in justice without adequate training for judicial personnel could amplify risks of bias and incorrect decisions. For this reason, various international organizations, such as UNESCO and the OECD, are promoting training programs to ensure judicial personnel have the necessary skills to understand and responsibly use AI technologies. For example, the fAIr LAC+ program by the Inter-American Development Bank offers ethical guidelines and tools to evaluate AI use in the public sector, specifically targeting contexts in Latin America and the Caribbean.
Another issue is the lack of representation of women in AI algorithm development. According to a UNESCO study, only 22% of AI professionals are women, which can lead to a lack of gender perspectives in technology development. This imbalance not only perpetuates existing biases but also limits the ability to create inclusive solutions that protect all sectors of society. UNESCO has launched initiatives like the Global AI Ethics and Governance Observatory to monitor the adoption of ethical guidelines and promote gender equality in AI technology development.
In conclusion, addressing the ethical challenges and human rights issues posed by AI adoption in justice and security requires a multilateral commitment. AI technologies can offer powerful tools for crime prevention and control, but their implementation must be accompanied by strict regulation, adequate training, and oversight mechanisms to ensure fair and responsible use. Without these measures, there is a risk that AI technologies will reinforce inequalities and perpetuate injustices rather than contribute to a safer and fairer society.
Conclusions
The use of artificial intelligence by organized crime represents a dark reflection of our technological aspirations, revealing one of the most profound dichotomies of innovation: the ability to amplify both human progress and its deviations. The real question is not so much how to limit the illicit uses of AI but rather understanding why systemic, cultural, and institutional vulnerabilities in our society continue to create fertile ground for such abuses. This analysis prompts a strategic reflection on governance, political priorities, and dominant economic models, urging businesses and institutions to rethink their roles in a complex and interconnected global scenario.
The increasing use of AI for illicit activities such as drug trafficking, human trafficking, and financial fraud cannot be viewed solely as a "technological problem" to be solved with more security tools. It is rather an inevitable consequence of a development model that prioritizes the speed of innovation over ethical and social sustainability. The democratization of access to AI technologies, while positive in many respects, is breaking down the technical barriers that once limited criminals' adoption of sophisticated tools. However, the deeper issue lies in the structural gap between the pace of technological innovation and the normative and institutional capacity to regulate it in real time.
The response strategy must go beyond implementing counter technologies such as facial recognition or predictive analysis and address a systemic revision of root causes. This includes institutional fragility in some regions, disparities in access to educational resources, and the inability to build resilient governance infrastructures. Organized crime exploits these weaknesses not only to adopt AI but to fill the voids left by inefficient or corrupt state institutions, strengthening its social and economic control.
An often-overlooked element is the private sector's role as a crucial actor in preventing abuses related to AI. Technology companies, traditionally seen as solution providers, are also the primary producers of technologies that end up being used illicitly. The urgency to monetize innovations often leads to relaxed safety standards, creating products that, while market-accessible, lack adequate safeguards against misuse. This raises an ethical dilemma for companies: to what extent are they willing to sacrifice economic growth to ensure greater social security? It is here that the concept of "shared accountability" should be redefined, requiring companies to take direct responsibility not only for the legitimate use of their technologies but also for the side effects of their applications.
The emergence of technologies like deepfakes and autonomous drones introduces another dimension of risk: AI's ability to undermine the very fabric of social trust. When the line between reality and fiction dissolves, the foundations that sustain cooperation and respect for the law begin to erode. This effect is not limited to those who directly suffer fraud or attacks but has deeper implications for the stability of financial markets, trust in public institutions, and the collective sense of security. In this scenario, companies are called to reflect on their role as guarantors of public trust, adopting stricter ethical principles in innovation processes.
Another key point is the paradox of predictive surveillance, which promises to prevent crime but risks fueling inequalities and discrimination. The adoption of predictive tools based on historical data can perpetuate existing biases, worsening the marginalization of already vulnerable communities. This raises a fundamental question: who has the right to define what constitutes "normality" or "deviation" in the context of security? Companies, particularly those operating in the technology sector, must consider their contribution to perpetuating these biases and develop algorithms that reflect authentic diversity rather than perpetuating exclusion models.
Finally, adopting AI in crime control should not become a technological arms race between criminals and institutions but rather an opportunity to rethink the foundations of the global security system. In a world where digital technologies know no borders, security must be redefined as a global public good, requiring deeper and more structured international cooperation. In this sense, businesses play an essential role in facilitating dialogue between governments, civil society, and academic institutions, contributing to the construction of an ethical and regulatory framework that is not merely reactive but proactive and long-term-oriented.
Artificial intelligence, therefore, is not just a tool but a mirror reflecting the priorities and fragilities of our societies. The way we choose to address its adoption by organized crime will not only determine the future of security but also shape our relationship with innovation, justice, and ethics.
Source: https://elpaccto.eu/en/