An excited buzz spread through the tech community throughout 2023 as a string of technological milestones and new AI tools made headlines. Over the last twelve months, artificial intelligence sparked conversations across industries and set a new era in motion.
It is almost too obvious that 2023 will be remembered as the year generative AI emerged into the mainstream: tools trained on vast datasets that create new outputs in response to prompts. McKinsey’s annual survey found that a staggering 79% of respondents had at least some exposure to generative AI, and 22% said they use such tools regularly in their work. A few highlights made this a transformative year:
OpenAI launched a subscription tier of ChatGPT and opened a pay-per-use API that lets third-party developers integrate ChatGPT and Whisper (its speech-to-text model) into applications such as home assistants. It also introduced GPT-4, a more robust and advanced large language model (LLM), reinforcing the evolution of AI and broadening ChatGPT's capabilities with image processing.
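To make the pay-per-use integration concrete, here is a minimal sketch of how a developer might chain the two models, assuming the openai Python package; the model names, audio file name, and prompts are illustrative placeholders:

```python
# Minimal sketch: transcribe a spoken command with Whisper, then answer
# it with a chat model. Both calls are billed per use via the API.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Speech-to-text with Whisper (the audio file is a placeholder).
with open("voice_command.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Feed the transcript to a chat model, e.g. for a home-assistant reply.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful home assistant."},
        {"role": "user", "content": transcript.text},
    ],
)
print(reply.choices[0].message.content)
```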
AI might also help find new cures in medicine. NVIDIA broadened its product portfolio with BioNeMo, an AI-driven platform for predicting protein structures, a crucial step in accelerating the development of potential new drugs. Additionally, NVIDIA expanded its collaboration with Microsoft, making its products more accessible to users on the Azure platform.
Meanwhile, AWS launched an ambitious initiative named 'AI Ready', a free online training program that equips both technical and non-technical professionals with the fundamental skills essential for careers in generative AI. The initiative not only addresses the pressing shortage of skilled AI professionals but also sets a goal of training 2 million individuals by 2025, paving the way for current and future workforce needs. On top of that, AWS is rolling out scholarship programs to promote AI education in high schools and universities worldwide.
Google released Bard, its chatbot counterpart to ChatGPT, while Microsoft announced plans to integrate ChatGPT into its product lineup. Not to be outdone, Google also introduced PaLM 2, an advanced LLM intended as a direct competitor to OpenAI's GPT-4, turning generative AI into a heated race.
Meanwhile, HeyGen was celebrated in social media circles for using generative AI to empower video creators. The service lets creators build avatars resembling themselves and generate video content, plus translations of it into numerous languages, all in the creator's own voice. Impressively, HeyGen automatically adjusts facial expressions and lip movements to synchronize with the translated speech, revolutionizing content localization.
But some of this year’s advancements came as an answer to growing concerns. MIT researchers developed PhotoGuard, a pioneering tool that adds imperceptible perturbations to photos to prevent non-consensual deepfakes. By tricking AI models into working from corrupted information during image generation, PhotoGuard offers a safeguard against misuse of visual content. The University of Chicago presented Glaze, a tool that applies similar subtle "cloaks" encoding an artist's distinctive style into images, so that models trained on them fail to mimic the original style, a significant step toward protecting artistic integrity in the digital world.
Looking ahead, development will not slow down in 2024, and we will continue to see a flood of new AI tools, although the pace of development and adoption may be hampered by hardware shortages and by legal concerns surrounding the use of AI and its outputs. The impending enforcement of the EU AI Act in 2024 sets a ticking clock for businesses involved in AI systems or components, potentially forcing them to align with the regulations sooner rather than later.
When you scratch beneath the AI surface, it’s clear that some of the hype will evaporate. LLMs can still be difficult to use in practice: they struggle to maintain context and often produce unreliable outputs, which restricts wider adoption. This year we may therefore see some businesses scale back their use of LLMs.
As current tools become more functional and user-friendly, the next generation of generative AI (Gen AI) tools will go far beyond the chatbots and image generators that amazed, and sometimes scared, us in 2023. The next wave will show multifaceted capabilities beyond textual interaction: multimodal AI tools comprehend and process information from diverse data types such as images, video, and audio. These advancements let engineers build AI agents tailored to a spectrum of tasks, while LLM-powered agents increasingly handle routine back-office operations.
For instance, ChatGPT's voice feature streamlines tasks by eliminating manual typing: users can speak a command and have the LLM retrieve summaries, generate charts, draft financial plans, and more. With Gen AI taking over such daily tasks, we humans can redirect our focus towards strategic and innovative projects.
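As a rough illustration of what multimodal interaction looks like at the API level, the sketch below sends a single request mixing text and an image to a vision-capable chat model, again assuming the openai Python client; the model name, image URL, and prompt are hypothetical placeholders:

```python
# Sketch of a multimodal request: one prompt combining text and an image.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # a vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the trend in this chart."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/q4-sales.png"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```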
Google's Gemini, a multimodal AI model launched in December 2023, sets out to reason seamlessly across text, images, video, audio, and code. The product is meant to close the gap between Google and OpenAI. But while Gemini launched with robust benchmarks and a captivating video demonstration, a closer look revealed that the demo had been heavily edited, leaving critics with more questions than answers.
As AI applications grow in complexity, ensuring oversight of these tools becomes increasingly important. Having people with the knowledge to analyze and debug them will be essential for handling risks such as errors and biases in an agent's planning, task execution, and reflection processes; human supervision will remain an integral part of solving these issues. Simultaneously, establishing responsible Gen AI principles throughout the development process is critical.
With the upcoming U.S. election, we will most likely see an increase in manipulated videos portraying politicians and others saying things they never said, a significant concern for consumers and voters alike. Legislative action is on the horizon, notably within the EU, where the final EU AI Act will start to have its first impact in 2024. However, we still don’t know how this will affect major American tech companies and their AI models. Some U.S. states may implement their own laws, but overall, we might not see any significant federal regulation in the U.S.
Our expectations for 2024 include a surge in startups, and in companies like OpenAI unveiling larger AI models with a wide range of new functionalities. Nonetheless, the debate over what AI is and what it should be allowed to do will remain open, overshadowed by concerns about present-day challenges like disinformation and the spread of deepfakes. While fears of AI dominating the world remain Hollywood material and conspiracy hype, our immediate focus should be on addressing the harms already inflicted by disinformation and manipulated media.
We are so used to high-speed internet connections and smoothly running applications that we hardly think about the hardware behind them. GPUs are essential to AI operations, and as more companies seek AI capabilities, unprecedented demand for GPUs, primarily produced by companies like NVIDIA, will emerge, potentially limiting availability. GPU scarcity will affect not only companies’ competitiveness but also the organizations developing AI innovations.
Consequently, we will see growing pressure to increase GPU production and to develop more cost-effective and user-friendly hardware alternatives. Ongoing research in electrical engineering at Stanford and other institutions is already exploring low-power substitutes for existing GPUs.
It might take a while to get from the research phase to widespread availability of such solutions, but accelerating these developments is crucial to preserving democratic access to AI technologies.
In 2019, Gartner foresaw low-code/no-code tools dominating 65% of app development by 2024. That prediction aligns with the rise of generative AI tools such as ChatGPT, which enable app creation and testing within minutes. While coding and software engineering roles won't disappear (someone still needs to build these AI tools), 2024 will present a great opportunity for creatives who like to solve problems but lack the necessary technical skills.
AI has undeniably transformed the lives of high school students, simplifying essay writing and helping with homework, but regrettably, it has also empowered cybercriminals. The growing volume of AI-generated hacking code in 2023 marked a concerning trend, one that is likely to intensify in the upcoming months.
One concerning innovation that has drawn attention in online forums is WormGPT, an LLM that mimics ChatGPT but strips away the ethical safeguards and is built for malicious intent. The tool has reportedly been used to facilitate hacking campaigns and marks a significant addition to the arsenal of cybercriminals.
Once an LLM is built to execute any malicious prompt, the issue lies in the speed and volume of scams such a model can produce, especially when wielded by professional hackers. We have all seen how quickly these models generate text and how easily they replicate a given recipe. Now imagine that output being cyberattacks, phishing emails or malicious code, that even first-time criminals can create.
But ChatGPT poses a security risk from within organizations, too. The rush to blindly use LLMs at work, in whatever integration or form, isn't without risk. These hyped AI tools have been reported to lack solid data privacy standards, and attackers might be able to extract your company's internal data simply because an employee has fed it to the LLM: a data leak without any network infiltration at all.
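One simple, partial mitigation is to redact obviously sensitive patterns before a prompt ever leaves the organization. The sketch below is a toy illustration of that idea, not a complete data-loss-prevention solution; the patterns and the example prompt are hypothetical:

```python
# Toy pre-prompt filter: replace sensitive substrings with placeholders
# before sending text to an external LLM service.
import re

SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{16}\b"), "[CARD_NUMBER]"),                # 16-digit card numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),     # email addresses
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API_KEY]"),  # API key assignments
]

def redact(prompt: str) -> str:
    """Strip known sensitive patterns from a prompt before it leaves the network."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

text = "Summarize this: contact jane@corp.com, api_key=sk-123-secret"
print(redact(text))
# Summarize this: contact [EMAIL], [API_KEY]
```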
But there is hope, in the form of a two-way trend. AI has been an established part of cybersecurity for years, and as we move into 2024, we expect it to evolve and grow alongside our challenges. Organizations are already adapting by integrating machine learning tools into their security operations center (SOC) platforms and setups.
It seems like a cybertale as old as time, but the solution lies in proactive measures. IT security professionals must focus on reducing breaches caused by AI-generated code by rigorously monitoring their networks for any abnormalities. Moreover, organizations need to establish clear policies on approved AI tools and their proper usage to ensure compliance with existing data privacy and cybersecurity standards.
Looking ahead, vigilance is key. Predictions suggest a surge in cybercriminals exploiting AI and outdated security measures. There is also a looming threat to managed file transfer (MFT) systems and file servers, which makes them lucrative targets. Strengthening defenses is imperative, especially against tactics such as releasing stolen sensitive information via torrents.
One defense strategy that has proven its effectiveness is understanding the network activity and user behavior within your organization. When expected user behavior is monitored closely, slight anomalies can indicate a security breach; for instance, a sudden, unusually large data upload should raise a red flag signaling a potential threat.
AI can already detect such deviations today, and we expect it to become even more nuanced in the future, capable of identifying even the stealthiest attackers.
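As a toy illustration of this kind of baseline monitoring, the sketch below flags a user whose daily upload volume deviates sharply from their own history; the data, user names, and threshold are hypothetical, and real detection engines use far richer behavioral models:

```python
# Toy baseline check: flag a user whose upload volume today is far
# outside their own historical norm (a simple z-score test).
import statistics

# Hypothetical per-user history of daily upload volumes, in MB.
HISTORY = {
    "alice": [120, 135, 110, 128, 140, 125, 133],
    "bob":   [45, 50, 48, 52, 47, 49, 51],
}

def is_anomalous(user: str, todays_upload_mb: float, z_threshold: float = 3.0) -> bool:
    """Return True if today's upload volume deviates sharply from the baseline."""
    baseline = HISTORY[user]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero
    return (todays_upload_mb - mean) / stdev > z_threshold

print(is_anomalous("bob", 49))   # False: an ordinary day
print(is_anomalous("bob", 900))  # True: a sudden, unusual upload
```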
Ethical, legal, and socio-political questions around AI will only become more complex. Following the political discussions and decisions of 2023, the EU's AI Act is set to come into effect in the first half of 2024, a monumental step toward regulating the entire spectrum of AI development, distribution, and deployment within the EU. The legislation takes a risk-based approach, categorizing AI practices into unacceptable, high-risk, limited-risk, and minimal or no-risk tiers, each subject to varying degrees of regulation. With non-compliance fines reaching up to EUR 30 million or 6% of global turnover, companies will face significant challenges, established AI players and market newcomers alike.
The new year will also make the need for international collaboration on AI safety plain. While the EU spearheads AI legislation, countries like the US and UK have taken different approaches. The Biden administration's Executive Order of October 2023 outlined a holistic approach to building trustworthy AI, coinciding with the UK-hosted international AI Safety Summit that produced the Bletchley Park declaration, the beginning of collaborative efforts on the matter. Notably, the UK and other nations announced AI regulation measures around the summit, setting the stage for further discussions and summits dedicated to AI regulation and safety. Knowing the usual pace of the political apparatus, new technology might cross the finish line before politicians know which distance they need to run. Trying to keep up, two further AI Safety Summits are already planned for the middle and end of this year, where 28 countries, including the United States, China, and the European Union, will follow up on recent developments.
There is no doubt that generative AI will transform the cybersecurity landscape in significant ways. But as we step into 2024, our outlook for AI and its effect on the rest of the world remains optimistic. 2023 was one big hype cycle; going forward, we will be a little less starry-eyed and more focused. The year ahead will see the birth of several new AI models with diverse capabilities, presenting new risks as well as great opportunities. Cost considerations, limited hardware production, and regulatory policies will have a substantial influence on enterprise adoption of AI tools, steering the integration and impact of AI in organizations. As the year unfolds, navigating these developments will shape AI's role in driving innovation and efficiency across industries, and especially within cybersecurity.
We are a dynamic team of creative strategists and digital experts committed to spreading the word about all things cybersecurity. We do more than sell a network detection and response system; we keep our fingers on the pulse of cyber trends and share the knowledge we have within Muninn.