For about a year and a half now, companies have been investing heavily in generative AI: artificial intelligence that can generate new content such as text, images, and audio. Many people are still waiting for the pay-off, wondering whether this is a real phenomenon or massive hype. So, is generative AI real or fake? The answer is both: the capabilities are real, and much of the surrounding hype is not. It’s here to stay, and the question now is how to use it.
Understanding Generative AI
Generative AI refers to artificial intelligence models that can generate new content, such as text, images, or audio, based on the data on which they have been trained. These models can understand and process natural language, making them versatile and powerful. A well-known example is ChatGPT, OpenAI’s conversational model, which can converse in a human-sounding way, answer questions, and create various pieces of text such as stories and poems.
Adoption and Implementation
Like any new technology, the adoption and implementation of generative AI is a journey, and it brings its own set of challenges. The first stage involved learning and exploring what’s possible; organisations must now hone and improve their use cases and optimise them for their business.
It’s one thing to bring generative AI into an organisation, and quite another to do it in such a way that it becomes embedded safely and effectively. Organisations will need to consider everything from their security structures to who buys into the idea and what needs to be optimised. The technology can’t be implemented in isolation; it’s also the culture that will foster ways of working with it, share good practice, and spot people or processes that aren’t working as they should.
IgniteTech, a software holding company, is one example of a company that has successfully incentivised the use of generative AI. Its CEO gave employees free access to GPT-4 and offered cash incentives for the best prompts, encouraging them to use and experiment with the technology.
Here are some examples of generative AI:
1-Generative AI in Content Creation
Content creation is the most common application of generative AI tools, and the one that springs to mind first: articles, blog posts, scripts, even the drafting of whole books on a given topic. Simply enter the subject matter and, voilà, ChatGPT and similar tools can produce summaries, outlines, or full-length pieces.
For example, a marketing team could use generative AI to rapidly produce social media posts, product descriptions or email newsletters that would fit its brand voice and target audience, and writers and journalists could use it to research ideas and sources, generate story ideas, or draft articles or scripts.
However, despite generative AI’s apparent proficiency, human review and editing remain essential to ensure accuracy, check facts, and preserve a consistent voice and style.
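To make the drafting step concrete, here is a minimal Python sketch. The brief-building helper, the prompt wording, and the model name are illustrative assumptions rather than any team’s actual workflow; the API call follows the pattern of OpenAI’s chat-completions client, and the resulting draft would still go to a human editor, as noted above.

```python
import os


def build_brief(topic: str, voice: str, channel: str) -> str:
    """Assemble a drafting prompt that pins down topic, brand voice, and channel."""
    return (
        f"Draft a {channel} post about {topic}. "
        f"Write in a {voice} voice, keep it under 80 words, "
        "and end with a clear call to action."
    )


def draft_post(topic: str, voice: str, channel: str) -> str:
    """Send the brief to a chat-completion model and return the draft.

    Requires the `openai` package and an OPENAI_API_KEY in the environment.
    The model name below is an assumption; substitute whatever is current.
    """
    from openai import OpenAI  # imported lazily so build_brief has no dependencies

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": build_brief(topic, voice, channel)}],
    )
    return response.choices[0].message.content


# The deterministic part of the workflow: a reviewable, reusable brief.
brief = build_brief("our spring sale", "friendly", "social media")
print(brief)
```

The point of separating `build_brief` from `draft_post` is that the brief itself can be versioned and reviewed by the marketing team, while the generated draft remains raw material for human editing.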
2-Generative AI in Design and Creative Work
Generative AI is also starting to disrupt creative industries, including the creation of images, artwork, animations and, yes, even music composition. These techniques allow users to create one-off images, artwork, and other designs from text prompts – or, even more intriguingly, from other images. DALL-E, Midjourney, and Stable Diffusion are all generative AI systems that aim to produce content matching the user’s intent as closely as possible.
For example, a graphic designer might use generative AI to quickly iterate design concepts, create mockups, or even generate design assets (such as icons, patterns or textures). Such generative AI could allow designers to more quickly explore ideas and iterate over potential executions, leaving more time for refinement and creative direction.
In music, tools such as Riffusion and Mubert can generate new melodies, harmonies, and even full songs from user inputs and preferences. Not a replacement for the human spark of creation, AI can serve as inspiration and, potentially, as a tool in the composition and songwriting process.
3-Generative AI in Data Analysis and Visualization
Generative AI can also be deployed to analyse and visualise data. Working with large data sets, this type of model can produce insights, summaries and visualisations that would be difficult or even impossible for humans to create manually.
A business analyst, for instance, might use a generative AI model to create a report, dashboard or data visualisation from a complex dataset, enabling a better understanding of patterns, trends or takeaway insights.
A scientist could use generative AI to examine their own data from an experiment, simulation, or observational study and visually represent it in a new way, which might help them make a new discovery or come up with a new hypothesis.
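As a minimal illustration of the human side of that workflow, the sketch below condenses a dataset into the compact text summary an analyst might hand to a generative model before asking for insights. The toy dataset, column names, and helper name are assumptions for illustration only.

```python
import pandas as pd

# A toy sales table standing in for the "complex dataset" described above.
df = pd.DataFrame({
    "region": ["North", "North", "South", "South", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1"],
    "revenue": [120, 135, 90, 110, 75],
})


def summarise_for_model(frame: pd.DataFrame) -> str:
    """Condense a table into the short text context an analyst might paste
    into a generative model before asking it to surface patterns or trends."""
    totals = frame.groupby("region")["revenue"].sum().sort_values(ascending=False)
    lines = [f"{region}: total revenue {total}" for region, total in totals.items()]
    return "Revenue by region:\n" + "\n".join(lines)


print(summarise_for_model(df))
```

Summaries like this keep the prompt small and auditable: the analyst controls exactly which figures the model sees, rather than pasting in an entire raw dataset.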
Ethical Considerations and Limitations
While generative AI can be tremendously beneficial and has many useful applications, we still need to be conscious of its limitations and risks. Important concerns are that it might exhibit bias, hallucinate, mislead users, or be used for nefarious purposes.
In addition, copyright and intellectual property rights raise complex ethical issues, given that the models are trained on huge quantities of existing, often copyrighted, content. These issues must be addressed whenever commercial applications are built on generative AI.
At the same time, it’s vital to remember that generative AI is simply that: a tool. A hugely complex, sophisticated, powerful tool, but a tool nonetheless. And like any other tool in any field of human endeavour, it should be applied by humans, who hold the responsibility for using it with thoughtful care and discernment.
With all its benefits, generative AI also prompts broader concerns. Maybe it’s the high carbon cost of training these models. Maybe it’s the possible degradation of media environments. Maybe it’s the potential to undermine copyright conventions. Or maybe it’s simply a lack of trust in how employees will use it.
Organisations need to think more about these implications and put in place rules and policies for how to use generative AI responsibly and ethically.
The Future of Work
Generative AI is likely to change how we work in nearly every sector. Tasks previously thought to be uniquely human can now be automated or augmented by these powerful models. Whether this leads to worker displacement or to reskilling and upskilling has broad policy implications, but Ethan Mollick, professor at the Wharton School, suggests dividing work into three categories: ‘Just Me’ tasks, delegated tasks, and automated tasks, with human effort focused on the first.
Writing recommendation letters or grading homework assignments, for instance, might remain ‘Just Me’ tasks, in which the human touch, and the investment it signals, is valued. Other tasks, such as research, writing, and data analysis, might be farmed out to AI.
‘Just Me’ tasks are the things a human should do, either because the human wants to do them or because they require a human and cannot be automated. Mollick gives the following examples.
Writing letters of recommendation: Mollick says that, as a professor, he still writes letters of recommendation by hand because it’s a marker that he cares about the student and is deliberately taking the time to do the task. An AI would likely write a better letter, but the human effort is part of the meaning.
Grading student assignments: Similarly, Mollick still grades assignments by hand. He feels that, as a professor, he has an obligation to look at each student’s work directly, even though an AI could do that more accurately.
Journal reviews: When he reviews academic papers, Mollick writes the reviews himself, but then uses AI to review the paper as well, to see if there are discrepancies.
The basic idea is that some things are just not going to be fully automated. Whether because, for cultural or social reasons, we value the human touch, personal investment, or human judgment, or because some tasks could be automated without actually benefiting from it, there are moral or principled lines we wouldn’t want to cross, even where an AI could arguably do the work technically better than us.

There are open questions about whether to use AI on tasks such as letters of recommendation, where the output may be superior but the human investment itself has signalling value. Organisations will need to work out which tasks are worth keeping human, which could be automated away entirely, and which could be augmented.

‘Just Me’ tasks, then, are those considered truly essential for a human to perform: either because the human investment is valuable in and of itself, or because they require human judgment and oversight that cannot yet be fully automated. Wherever the moral lines are drawn, we should expect them to shift as AI’s powers increase.
Embracing Change
As with any disruptive technology, there will be winners and losers as organisations and individuals adapt to the possibilities that generative AI opens up. Leaders will need to formulate a roadmap for the kind of organisation that works best in this new world, and encourage their people to experiment and innovate.
As Ethan Mollick, an operations and information management professor at the Wharton School of the University of Pennsylvania, recently told CNBC, managers can encourage others to experiment with generative AI by modelling the approach themselves and telling others when it works and when it doesn’t.
This technological shift merits senior leadership’s attention as an organisational priority: investing in figuring out how to harness generative AI may be the best path to becoming a 21st-century ‘technological visionary’ and building the next great enterprise.
Conclusion
A new technology is morphing and evolving in the workplace faster than ever: generative AI, exemplified by GPT and ChatGPT. Although it heralds promising prospects for higher productivity and innovation, it also stirs ethical reservations and flattens traditional organisational structures.
In this new environment, organisations must continue to experiment, change and create a vision for strategically using generative AI in their operations to stay ahead of the curve.
Author
Aows Dargazali, Regional Winner for Europe at Meta, is an entrepreneurial Chartered Manager and an alumnus of the University of Oxford’s Executive MBA programme. He leads an organisation recognised by The Daily Telegraph for its innovative use of technology in education and healthcare. With 10 years of experience developing gamified learning technologies, Aows writes about the impact of AI in education and health at UniHouse.