14 Comments
Janet Salmons PhD:

Fantastic, accurate checklist.

This article shows the workers' side of the issue:

Bosses say AI boosts productivity – workers say they’re drowning in ‘workslop’

Graham Lovelace:

Utterly brilliant! I think you've covered everything there Jim. Great job.

Jim Amos:

Thanks Graham. Let’s see how many corporate leaders print this out for their next all-hands LOL

Digital-Mark:

They are still in Meta lalaland. 😂

Chris Buijs:

No one in their right minds would sign off on this... And yet...

Karen Smiley:

That checklist is what real "informed consent" by CxOs should look like, Jim. Well done.

illustr8d:

As someone who is literally struggling to have any art career at all, having realized that the ethics of AI may not catch up in my lifetime, I am now in the unenviable position of either not sending my art out there at all (which I've chosen for several years) and literally having no career, or (and I'm here, kicking and screaming) putting my art out there knowing it will be stolen, knowing someone will use it to try to take my customers away, knowing that I will have literally no power over something I have ownership of, knowing it will feel like assault over and over again. At any rate, I appreciate this.

(And I'm retired from a medical job where we were very attentive about HIPAA compliance. It's horrifying to see everyone be so cavalier about it. Glad I'm retired.)

Digital-Mark:

From the standpoint of information security (where cybersecurity lives) and Governance, Risk and Compliance, there's no such thing as AI governance. First, it doesn't have a leg to stand on legally, and second, you cannot enforce any form of governance without cybersecurity and GRC. AI demands none of that and completely disregards any data protection reasoning whatsoever, so the term "AI governance" is non-existent.

Vee (PhD):

Thanks for sharing this perspective, Jim. We truly need to be honest about these realities to collectively figure out better options. What are some alternatives you would suggest? Is local AI a better option?

Jim Amos:

It might be. It depends on how it was sourced and how it was trained. Anything trained and sourced ethically will be much smaller and less capable, because such models are produced by smaller groups with limited resources and have a smaller dataset to train on. I go a step further and ask: do we even need generative AI at all? Its capabilities are grossly exaggerated, while its flaws and sheer incompetence are not publicized enough. There are also too many security and psychological risks to count, let alone the environmental impact. For what? Some chatbots, or clumsy agents that can write emails or order a pizza? Honestly, the world has gone mad.

Vee (PhD):

Thanks so much for this thoughtful response, Jim. That is truly the million dollar question... do we even need Gen AI? I really don't know. As I'm not a technologist and still so new to this space, there's so much I'm still trying to wrap my head around when it comes to the development of AI. No doubt, the documented harms are sadly immense... and that's what got me first interested in learning more about AI.

That said, I have also heard about a few benefits of task-specific AI (i.e. not frontier models) mentioned by ex-Silicon Valley technologists who are anti-capitalist and are advocating for responsible development and use of AI. Is this even possible? That, I honestly don't know. Still learning, still trying to make sense of it all. Thanks again for sharing your insights. 🙏🏾

Mark A. Bassett:

Nice list. Here’s mine:

On the performative opposition to GenAI

☐ I accept that I will collapse all GenAI systems into a single, simplistic category

☐ I accept that I will ignore models only trained on licensed or proprietary data where it undermines my argument

☐ I accept that I will evaluate the entire technology based on the worst output I can generate in 30 seconds

☐ I accept that I will ignore any output that contradicts my position

☐ I accept that I will describe all use by students as “cheating”

☐ I accept that I will pretend I can reliably distinguish AI-generated text from human writing

☐ I accept that I will treat my inability to use the tools effectively as evidence that they are ineffective

☐ I accept that I will dismiss demonstrated expertise as “just prompting tricks”

☐ I accept that I will insist there are no legitimate use cases in education, despite this being demonstrably false

☐ I accept that I will frame nuance as moral weakness

☐ I accept that I will present hypothetical harms as current, widespread realities

☐ I accept that I will conflate possibility with inevitability

☐ I accept that I will ignore how I routinely used AI in countless products for years and was fine with it

☐ I accept that I will demand perfect outputs from GenAI while tolerating mediocrity from humans

☐ I accept that I will position myself as defending standards without specifying what those standards are

☐ I accept that I will treat disagreement as ignorance rather than engage with it

☐ I accept that agreement with my position is the only acceptable position

☐ I accept that I will selectively forget my dependence on the same “big tech” industry I now criticise

☐ I accept that my conclusion was reached first, and my reasoning assembled afterwards

☐ I accept that I will resist adapting my practice while insisting the environment has not changed

☐ I accept that I will dismiss critiques exposing inconsistencies in my position as “whataboutism”

Mark A. Bassett:

Oh, one more...

I accept that I am posting this from a mobile device containing rare earth minerals that are basically the blood diamonds of the electronics industry, manufactured through supply chains I have never scrutinised, on a platform whose data centres produce carbon emissions that far exceed anything I have attributed to the AI tools I am critiquing.

Jim Amos:

The inevitable false equivalence strawman has arrived. Whatever man, you're part of the problem.