
Artificial Intelligence (AI) Risks

Updated: Apr 14

This website contains affiliate links, advertisements, sponsored content, or brand partnerships from which income is earned.

"The future has not been written. There is no fate but what we make for ourselves."

— John Connor

Artificial Intelligence (AI): AI is a technology that enables machines to perform tasks that usually require human intelligence, such as visual perception, speech recognition, and decision-making. Its implications for society include improved efficiency, automation of tasks, and enhanced problem-solving, but there are also several potential dangers associated with its development and deployment.

Some of the most significant dangers, or risks, of large-scale implementation of Artificial Intelligence (AI):


Unemployment

As AI systems become more advanced, they may be able to perform tasks that were previously done by humans, which could lead to significant job losses in certain sectors. There is no "may be able to" about it; this is already happening. We've been watching it unfold over the last few years, and now, with applications like ChatGPT, we are seeing widespread adoption of AI. Just recently, BuzzFeed announced it was cutting around 180 employees and replacing some of the work they did with ChatGPT.

The trend of AI replacing human jobs has accelerated since 2022, with various industries adopting automation at a rapid pace. For example, in addition to the Buzzfeed layoffs, companies like Amazon, Walmart, and McDonald's have been implementing AI systems to handle tasks ranging from customer service to inventory management, leading to significant workforce reductions.

As AI technology continues to advance, it's likely that this trend will continue, impacting a wide range of professions.


Bias and Discrimination

We already know AI systems can be biased due to the data they are trained on or the algorithms used to build them, leading to discrimination against certain groups. Remember Tay, Microsoft's AI chatbot? That was a fiasco.

The issue of bias in AI has become even more prominent, with several high-profile cases highlighting the dangers. For example, in 2023, Facebook faced backlash over allegations that its AI-powered ad delivery system discriminated against certain demographic groups. Amazon's AI recruiting tool, scrapped after it was found to disadvantage women, also underscored the challenges of biased algorithms. These cases have spurred calls for greater oversight and regulation to ensure that AI systems are fair and unbiased.
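This kind of bias is often measurable. As a purely hypothetical illustration (the data, group labels, and numbers below are invented for this sketch, not taken from any of the cases above), a basic fairness check compares selection rates between two groups of candidates:

```python
# Toy demographic-parity check on hypothetical hiring decisions.
# All data below is invented for illustration only.
def selection_rate(decisions):
    """Fraction of candidates who received a positive decision."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of selection rates between two groups. Values far below 1.0
    suggest bias against group_a; the common 'four-fifths rule' treats
    anything under 0.8 as a red flag."""
    return selection_rate(group_a) / selection_rate(group_b)

# 1 = hired, 0 = rejected, for two hypothetical demographic groups
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 2 of 10 selected
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]  # 5 of 10 selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40, well under 0.8
```

Real audits use far richer statistics, but even a crude check like this would have flagged the kind of skew described above.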

Lack of Transparency

In my opinion, this is the most difficult aspect. Can't live with it, can't live without it. AI systems can be hard to understand, making it difficult to determine how they reach their decisions and whether they are acting ethically.

A Harvard Business Review article states,

"Transparency can help mitigate issues of fairness, discrimination, and trust — all of which have received increased attention. Apple’s new credit card business has been accused of sexist lending models, for example, while Amazon scrapped an AI tool for hiring after discovering it discriminated against women."

Great, we love transparency, but in the very next paragraph of that article, transparency itself is framed as a risk.

While transparency remains a critical issue, efforts are being made to address it. The European Union's General Data Protection Regulation (GDPR) includes provisions for the "right to explanation," requiring companies to provide users with information about the logic behind automated decisions. Achieving full transparency in AI systems remains a complex challenge, especially as AI models become more sophisticated and opaque.
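To make the "right to explanation" idea concrete, here is a hypothetical sketch of what an explainable decision might look like for a simple linear scoring model. The feature names, weights, and threshold are all invented for the demo; real credit models (and especially deep models, where opacity is the actual problem) are far more complex:

```python
# Hypothetical "right to explanation" breakdown for a toy linear
# credit-scoring model. Weights, features, and threshold are invented.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # approve if score >= THRESHOLD (arbitrary for the demo)

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant):
    """Return the decision plus each feature's contribution to the
    score, largest impact first, so a user can see what drove it."""
    contribs = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    decision = "approved" if score(applicant) >= THRESHOLD else "denied"
    ranked = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

applicant = {"income": 3.0, "debt_ratio": 1.5, "years_employed": 2.0}
decision, ranked = explain(applicant)
print(decision)  # "denied" (score = 1.5 - 1.2 + 0.6 = 0.9, below 1.0)
print(ranked)    # income helped most; debt_ratio dragged the score down
```

For a linear model this kind of breakdown is trivial; the hard part, as the paragraph above notes, is producing anything comparable for a large opaque model.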

Security Risks

AI systems can be vulnerable to cyberattacks and could potentially be used to carry out malicious activities, such as cyber warfare or terrorism. There are too many scenarios to list here but I will provide a few:

1. Compromising the integrity of the AI decision-making process.

2. Data poisoning, where attackers inject malicious training data.

3. Transparency measures that expose model internals to attackers.

4. Lack of policy or certification standards for AI.

5. Lack of audit capability.

'Nuff said. Security is a MAJOR issue with AI.

Security risks associated with AI have become more pronounced, as evidenced by the increasing frequency of AI-driven cyberattacks. In 2023, a ransomware attack targeted a major city's AI-powered traffic management system, causing widespread disruption. Such incidents highlight the critical need for robust cybersecurity measures to protect AI systems from malicious actors.
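Data poisoning, in particular, is easy to demonstrate on a toy model. In this hypothetical sketch (all numbers invented), an attacker slips spam-like messages mislabeled as legitimate into the training set of a tiny nearest-centroid spam filter, and a message that was correctly flagged before the attack slips through afterward:

```python
# Toy data-poisoning demo: mislabeled training points shift a
# nearest-centroid classifier's decision. All numbers are invented.
def centroid(points):
    return sum(points) / len(points)

def classify(x, spam_points, ham_points):
    """Label x by whichever class centroid it sits closer to."""
    if abs(x - centroid(spam_points)) < abs(x - centroid(ham_points)):
        return "spam"
    return "ham"

ham = [1.0, 1.2, 0.8, 1.1]    # legitimate messages cluster near 1.0
spam = [5.0, 4.8, 5.2, 5.1]   # spam messages cluster near 5.0

print(classify(4.5, spam, ham))  # "spam" -- correctly flagged

# Attack: inject 8 spam-looking points mislabeled as legitimate,
# dragging the "ham" centroid toward spam territory.
poisoned_ham = ham + [6.0] * 8

print(classify(4.5, spam, poisoned_ham))  # "ham" -- spam now slips through
```

Production systems are attacked the same way in principle, just against models and data pipelines that are vastly larger, which is why training-data integrity belongs on the security checklist above.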


Control and Accountability

Okay, this is some Skynet stuff here, and I'm not talking about the NSA's surveillance program. That's another article, for another day. As AI systems become more advanced, it may become difficult to control them, and it may be unclear who is responsible if something goes wrong.

Can you imagine the shit show that would unfold in a crisis like this? The US has botched what seems like every major crisis response in recent memory.

The Institute of Electrical and Electronics Engineers (IEEE) sums it up like this:

"It may be theoretically impossible for humans to control a super-intelligent AI, a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it’s on the verge of being created."

Great, that's not scary at all.

The issue of control and accountability in AI has only deepened in the last year, with growing concerns about AI systems becoming too autonomous. Efforts to address this challenge include frameworks for AI governance and ethics, such as the IEEE's Ethically Aligned Design initiative. But questions about who should be held accountable for AI-related failures, and how to ensure human oversight, remain unresolved. Ugh!

Existential Risks

Some experts warn that the development of super-intelligent AI could pose an existential risk to humanity if it becomes impossible to control, or if it decides to act against our interests. Uhm. What?

The concept of super-intelligent AI posing an existential risk to humanity has gained more attention, fueled by advances in AI research and the increasing complexity of AI systems.

While the likelihood of such a scenario remains uncertain, experts continue to debate the potential risks and safeguards needed to prevent AI from posing an existential threat.

Final Thoughts

The large-scale implementation of AI presents significant risks across various fronts. Unemployment is already a reality in many sectors as AI systems replace human workers. Bias and discrimination remain major concerns, as highlighted by recent cases involving prominent tech companies. The lack of transparency in AI decision-making processes continues to be a challenge, despite efforts to improve accountability.

Security risks have become more pronounced, with AI systems vulnerable to cyberattacks and potential misuse. Control and accountability issues are also escalating, with questions arising about the ability to manage increasingly autonomous AI systems. The development of super-intelligent AI raises existential risks that require careful consideration and proactive measures to mitigate.

Addressing these challenges will require a multidisciplinary approach involving policymakers, technologists, ethicists, and the broader society to ensure that AI is developed and deployed responsibly and ethically.



