Just AI Media’s 2024 Responsible AI Review
This year has been action-packed with AI moments, some of which were great, and others that were less than stellar. Here’s our round-up of the top AI moments across three categories - Iconic AI Innovations, Responsible AI Blunders, and Responsible AI Wins for Humanity. If you think we missed one, drop a comment and let us know.
Iconic AI Innovations of 2024
I dug back into the archives of the Just AI newsletter and videos to identify some of the best innovations of 2024, and I was shocked to realize how quickly things have changed. Multi-modal AI wasn’t even commercial in early 2024, and now it’s the standard. We chose these innovations based on their level of breakthrough in the field of AI, proof of utility (is it real?), and their benefit for humanity. Here’s Just AI’s list of the top 5 most Iconic AI Innovations of 2024.
5. Deep Reasoning Models - OpenAI
Deep reasoning models made a dramatic entrance in 2024 with OpenAI’s much anticipated “strawberry” model. Companies clamored to get their hands on the technology, which goes beyond finding, restructuring and re-sharing information. The model works through a problem step by step, reasoning its way to a conclusion before sharing a response or answer. It’s iconic because models of this type are considered the next major step in using AI to solve some of the world’s most challenging problems.
4. African Language LLM - Orange + Meta + OpenAI
Orange + Meta + OpenAI created an LLM that’s specifically meant to help identify and translate African languages. It’s iconic because it opens the door for more inclusive large language models to benefit a continent with over 3,000 unique, known languages.
3. AI & Drone Deforestation Solution - MORFO
MORFO, a Brazilian company, is using a drone/AI combo to combat deforestation. The drone flies over the landscape, the AI maps the deforestation and determines the amount of seeds needed, and then the drone re-seeds the area. It’s iconic because it’s 100x faster than humans, more cost effective, and safer.
2. Breast Cancer Stage Identification AI - MIT & ETH Zurich
MIT & ETH Zurich created AI breast cancer imaging that can identify the stage of a specific kind of breast cancer. What makes this iconic is how inexpensive it is to create and use the technology, as well as to capture, scan and store the images. The less expensive medical technology is to create and maintain, the higher the likelihood that it will benefit more people.
1. Practical AI Governance in Service of Humanity - Credo AI
Credo AI has been making AI governance easier for companies for years - I like to tell people that they were “there first.” They were the boots on the ground doing something about responsible AI when everyone else was just talking about it. When the EU AI Act entered into force, they were more than ready to help companies adhere to the Act, and they continue to adjust their offering with their mission in mind: to ensure that AI is always in service of humanity. What makes this iconic is Credo AI’s outsized impact on helping companies prioritize humanity as they adopt and use AI. They’ve had a banner year, and 2025 is looking great for Credo.
Responsible AI Blunders of 2024
The generative AI we enjoy now is new, and so is the field of responsible AI. It’s understandable that companies will have responsible AI blunders, but in every blunder there’s an opportunity to acknowledge the shortfalls and work toward a better product and future. In my ideal world, this list wouldn’t exist. The position of Just AI Media is that if companies can win in responsible AI, that’s a win for humanity. These cases were chosen precisely because the companies/individuals in question did not make any meaningful, public effort to acknowledge their shortfalls, and build toward a more responsible future. These are our top 5 responsible AI blunders of 2024.
5. OpenAI
OpenAI launched a lot this year, but they had their fair share of scandals, too. One of the most concerning was their employee non-disclosure agreement, which apparently violated whistleblower laws by asking employees to waive their anonymity and compensation rights. You can learn more about the specifics here, but this is a blunder because it fails to prioritize the law and works in opposition to the safety goals espoused by the company. Not to mention, it’s a horrible standard for the leader in AI to set.
4. Elon Musk
Elon Musk reposted a “parody” video of Kamala Harris’ campaign ad, which used AI to mimic her voice and make it sound like she was insulting herself and Joe Biden. The video glorified the use of AI to spread misinformation, and Musk re-posted it on X without any indicator that it was parody, a move that violated his own terms of community safety for X. This is a blunder because it showcases Musk’s disregard for standards in both the creation and dissemination of AI-generated content.
3. X
X made a few blunders this year. The first came when they failed to remove an AI-generated, sexually explicit, nonconsensual video of Taylor Swift. A single post (of many) was seen by over 40M people, and it took X more than 5 hours to take it down. Additionally, X started selling posted content to third parties so they can train their AI on it. These are blunders because they illustrate X’s unwillingness or inability to prioritize safety on their platform.
2. Google
Google’s responsible AI luck just isn’t improving. The company has had several blunders, but two from this year alone are top of mind. First, they launched Gemini (their generative AI chatbot) with safety features that were overly cautious, resulting in AI-generated images that featured only people of color. This is a huge blunder because it seems like something that could have been caught in testing. Google had to remove the feature for 6 months to re-train the model and perform safety tests. Google also invested in Character.AI to the tune of $2.7B. Read below to see why Character.AI holds our number one spot in AI blunders for 2024.
1. Character.AI
Character.AI is a company that allows people to create their own AI companions, and they’re facing multiple lawsuits alleging harm to children. The lawsuits are brought by parents whose children were harmed by the technology. One teen died by suicide, allegedly at the prompting of the AI. Another teen complained about his parents’ limit on screen time, and the AI seemed to prompt him to murder his parents over it. And an 11-year-old child received inappropriate and suggestive content. These cases also claim a violation of the Children’s Online Privacy Protection Act (COPPA) because Character.AI allegedly collected the data of children under the age of 13 without receiving proper consent from parents.
This is the biggest AI blunder of the year for a few reasons. First, it’s negatively impacting children, which is egregious. It’s made worse by the fact that Character.AI doesn’t appear to have built safety into the product, and created legal terms that significantly prioritize the company’s own protection rather than that of users. It has also been said that Character.AI specifically markets to teens. Finally, Google invested in this company, and it doesn’t appear that the investment was contingent upon meeting safety standards. Another bummer for Google.
Responsible AI Wins of 2024
Despite some of the growing pains of the AI era, there were shining moments in AI research and some big wins for responsible AI in 2024. The efforts below were chosen precisely because each has the power to make a positive, outsized impact on humanity.
5. The UK & Seoul AI Safety Summits
The UK & Seoul AI Safety Summits brought together leaders and researchers from all over the world to engage in conversations about global AI safety. These summits are a huge win for humanity because, in addition to showing global cooperation around an important topic, the summits centered on the challenges and benefits of AI, and how the technology may be advanced safely in a way that benefits people - not businesses.
4. Texas Won a Years-Long Legal Battle with Meta
The State of Texas won its battle against Meta (through a settlement) over violations of the state’s facial recognition technology law. Even though this outcome monetarily benefitted the state of Texas, it was beneficial to humanity because it sent an important message: states can enact laws to protect their residents from nefarious or exploitative uses of AI.
3. Meta Released Open Source LLMs
Meta decided to go against the grain: instead of keeping their LLMs closed, they’ve made them open source so that the models can benefit developers across the world. This is a huge win for humanity because it increases access and lowers the barrier to entry for building with an LLM. Why does that matter? LLMs are expensive to build and expensive to build with. This move by Meta means you don’t have to be wealthy or well-backed to bring a good idea to life.
2. The UN Passed the Resolution on AI for Sustainable Development
The United Nations General Assembly passed a resolution focused on leveraging the power of AI for positive impact in the world. This resolution brings a focus on respecting human rights and bridging the digital divide, all while creating safe and secure AI with humans at the center. We’re beginning to see the impact and momentum of these goals through the UN’s funding of innovative projects and encouragement of events that prioritize these topics.
1. The EU AI Act Entered into Force
The EU AI Act officially went into force this year. It takes the cake as our number one responsible AI win for humanity because this incredible piece of policy was complicated to craft, took a lot of courage and innovation, set a global standard for responsible policy-making with people at the center, and is a practical move to protect people from AI. Is the Act perfect? No. But it’s a major start, and deserves to be recognized.
Congratulations to all of the amazing companies, individuals and institutions who are promoting the responsible use of AI in word and in action. For our 2024 AI blunders, we hope you’re able to adjust for a better, more responsible, 2025. Happy New Year!