Sorry folks, but AI isn’t ‘just a tool’

Apr 14, 2025 - 11:13

In the handful of years since generative AI became both a zeitgeist technology and a common dinner-table topic, people across the design industries—ranging from independent graphic designers to tech executives—have landed on a curious mantra to justify its use: it’s just a tool.

In this very publication, in 2023, designers Caspar Lam and Yujune Park wrote that “if we see a designer’s role as communicating and connecting ideas to humans in meaningful ways, AI image-generation becomes another tool and avenue for creative expression.” This perspective is not unique to them. Josh Campo, the CEO of Razorfish, extolling the virtues of AI for creatives in Forbes, wrote that “beyond enhancing efficiency, AI is opening doors to possibilities that creative teams didn’t have previously,” but he cautions readers to remember that “AI is just a tool.”

As part of a CNBC feature on graphic design and AI, Nicola Hamilton, president of the Association of Registered Graphic Designers (Canada), says that one of the most repeated statements about AI by designers is, indeed, that it is “just a tool.” She precedes this observation by noting that “dealing with new technology is nothing new” for designers. Some have even gone so far as to suggest AI is like a pencil. In a LinkedIn post, Peter Skillman, the global head of design for Philips, tells us that “AI is just a tool,” and then invites us to engage with his post by asking: “What’s your take on AI in the context of humanity-centered design?”

My take, if you’re not going to read the rest of this article, is that AI is very bad for the world, Peter. Very, very bad. 

I think it’s important to note that not everyone who is excited about AI (or concerned about it) is an adherent of the “just-a-tool” logic. There’s also the “it’s not just a tool! It’s even better!” crowd. I’ll refrain from engaging with this form of AI boosterism because I think the “just-a-tool” logic is more difficult to dismiss.

The “it’s not just a tool” crowd also includes folks circulating other AI promotional discourses such as, “AI isn’t just a tool, it’s a creative partner” and “it’s not a tool, it’s a paradigm shift.” These and other superlatives, however, like the “just-a-tool” logic, mask the material and ideological realities of AI, as well as its class politics—the way its use furthers the exploitation of the working class by the capitalist class. 

The great AI ‘panic’

One of the pillars of the “just-a-tool” logic is to suggest that those who are skeptical or worried about any new technology are simply “panicking” technophobes or just don’t understand it. Using this approach to accuse the more deliberative and discerning members of society of being somehow opposed to progress is much more effective than the “paradigm shift” or “it’s more than a tool” approach to talking about AI.

It might seem reasonable to be apprehensive about a “paradigm shift,” but it feels much less reasonable to have reservations about something that is “just a tool.” Indeed, if, as Hamilton said, designers have been dealing with new technologies for as long as the field has existed, then any apparent panic by a designer in response to AI must be an overreaction. New technology, says Hamilton, is an “evolution,” and, by this logic, to resist an evolution that is itself merely a tool is to oppose progress without reason. And even if one is panicking, adherents of the “just-a-tool” logic might remind us that “technological panic is not new.” Construing resistance to new technologies—regardless of their real impacts—as “panic” frames any kind of skepticism as unreasonable.

But panic is precisely what we should be doing. We should panic about generative AI, in part because its harms far outweigh any benefit to any designer or any member of the working class. When one looks at the landscape of the actual uses of AI—from political disinformation campaigns to AI-generated CSAM to non-consensual sexually explicit material to voice cloning used to scam people out of their life savings—panicking seems pretty reasonable.

Even if the aforementioned panic appears reasonable, we supposedly have nothing to worry about when it comes to concerns about job loss. Hamilton tells us that “[AI] will likely make some designers redundant. . . . In the same way that Canva made some designers redundant, or the introduction of computers pushed some folks out of the industry. It’s all the more reason . . . to look for ways we can make it work for us.” Many in the capitalist class—such as the World Economic Forum and PricewaterhouseCoopers—have gone as far as telling us that AI will create more jobs than it eliminates.

Though some folks who are invested in the maintenance of the status quo have attempted to substantiate this claim, there are three issues that I think complicate it. First, some job loss attributed to automation, as Aaron Benanav so elegantly demonstrates, is the result of deindustrialization and a shift to a much less employment-stable service sector, with underemployment and underreported unemployment becoming significantly more commonplace. Second, innovation under capitalism is characterized by a “race to the bottom,” or attempts to cut costs at every turn. Today, technologies such as genAI often serve to lower operational costs in a quest to juice quarterly earnings and ensure that the stock buybacks offered to shareholders are as lucrative as possible. And lastly, technology does not operate within a vacuum. It does not operate along some predetermined line of “development,” and it doesn’t just *poof* appear without people determining its design criteria, meaning how it functions and who benefits from those functions. 

The reality is that any efficiencies gained from the use of AI are not beneficial to anyone who doesn’t already have power and privilege in society. For the working class, it doesn’t really matter if more jobs get created, or if we are more productive, because most of the benefits will accrue to a shrinking number of capitalist oligarchs. Meanwhile, everyone else still suffers under conditions of decreasing real wages and increasing precarity. The class politics of this situation are crucial for clearly assessing advances in AI.

The myth of human centricity

The “just-a-tool” logic resonates with the idea that designers can be liberated to concern themselves with the choreography of systems and not pixels. In its 2025 Future of Jobs Report, the World Economic Forum pegged graphic design as the 11th fastest “declining job” per the predictions of employers (emphasis mine). UX jobs, along with Service Design, Customer Experience, and other more systems-oriented roles, will continue to grow. So while the nature of design jobs might be changed by AI, maybe the number of jobs won’t really change. And perhaps there’s a mutually beneficial trade-off, in which people who otherwise wouldn’t be able to afford high-quality bespoke design work can use generative AI, enabling professional designers to focus their creativity on “wicked problems.”

Such a perspective, however, is a privileged one and does not take class, capital, or the wellbeing of the planet into account. A systems-level approach to design—one that looks at the journeys of users through product-service ecosystems—should itself take into account the deleterious effects of AI on individuals, societies, and the environment, instead of accepting the purportedly benevolent purposes to which we are told it is put. 

Let’s take a moment and look at the Adobe Express commercial about the founder of Yendy, a skincare brand that seeks to challenge the exploitative nature of supply chains and support small-scale farmers in Northern Ghana. It sounds like a pretty cool company, as far as one can glean from the information on its website and social media. Adobe’s commercial, however, effectively instrumentalizes folks from the African continent to promote a technological “tool” that is itself inherently racist and colonialist.

Designers who see genAI as “just a tool” might be relatively unbothered by Adobe’s genAI, and might see this commercial as benign, if not heart-warming. But if such designers are truly “human-centered” (or “humanity-centered”) as they might claim, how could they watch that commercial and not think about the people in the Global South being exploited by the very technological developments that enabled the founder of Yendy to use Adobe Express in the first place? What about the colonialist history of AI itself and the ongoing neocolonialism of tech corporations? What about the global flows of wealth from the Global South to companies in the Global North? Or the environmental implications?

Furthermore, suggesting that AI is a tool that enables non-designers to make their ideas into reality while enabling designers to think at a higher level contributes to the obfuscation of AI’s (and design’s) real issue: technological innovation under capitalism is at odds with a just and sustainable way of living for everyone.

Why a tool isn’t just a ‘tool’

The last thing that I want to say about the “just-a-tool” logic is that the word “tool” itself is not inherently bad. But to suggest that something is just a tool is very problematic, indeed. In 1973, Ivan Illich put forward what is to me the most compelling approach to thinking about tools, which he understands in a broad and far-ranging sense, with tools including everything from hammers to highway infrastructures. Tools enable us to do things, but they also constrain our activities. They shape what is possible and the effects we can have on the world around us. On this account, tools are understood with a nuance that the “just-a-tool” logic itself negates. 

Tools, argues Illich, should be contextualized, understood through their relationships to the people who use them and who are affected by that use. Most importantly, writes Illich, the design criteria for all tools should be democratically determined. This is the opposite of the situation in which we find ourselves today. In our modern world, AI “tools” have been foisted upon us by tech oligarchs hellbent on squeezing every last cent of surplus value out of the working class, and because our understanding of the nature of tools is so deeply impoverished, we feel as though we must accept them on their terms.

But history shows that this also doesn’t need to be the case. Any further developments in AI must be met by resistance like that of the Luddites, who sought to destroy technologies that undermined their craft, exploited and endangered their comrades, and augmented surplus value for the capitalist class without enabling those who lost their jobs to share in the supposed wealth creation. And the working class must demand that the design criteria for any new technological innovation be democratically determined. 

Advances in computing could genuinely benefit the international working class if those very people were able to determine the design criteria for those innovations, taking into account the systemic interrelationships of labor and environment. What those technologies, those tools—including those used by designers—might look like is nearly impossible to imagine today. But if, as Father John Culkin wrote in 1967, “we shape our tools, and thereafter our tools shape us,” we better start reshaping our tools, and we must do so by any means necessary.