
Transparency can help us navigate uncertainty

At the Community Collab Summit – gAI Use Scenarios

At the recent Community Collab Summit, I facilitated a session about generative AI (gAI) at the university. In part of that session, I asked participants to imagine a few different versions of a scenario that plays out often here: Someone at the university works hard to create a report that can inform decision-making.

Suppose you were the one who created the report and I'm your supervisor. Consider scenarios A, B, and C:

  A. You send me the report.
    I ask gAI to summarize it.
    I review the summary, comparing key points to the original report.
    I compose and send an email to university VPs on the topic.
  B. You send me the report.
    I ask gAI to summarize it.
    I compose and send an email to university VPs based on the gAI summary.
  C. You send me the report.
    I ask gAI to summarize it.
    I ask gAI to compose an email to VPs. I click send.

Look carefully at the differences between these scenarios and, if you're willing, pause to think about your own reaction to each.

Sidenote: You can also imagine the next step after any of the above versions. A university VP asks gAI to read the email, generate action steps based on the university's core values, and send those action steps to the Deans.

For the folks in the session, I posed this follow-up question:

What if each scenario (A, B, & C) included a simple and accurate gAI disclosure statement?


My reactions and thoughts

Reacting to (A): Scenario A feels mostly fine to me. Using gAI to give me a head start on understanding a complex report seems both reasonable and useful. I would need to be very careful to investigate and dig into whatever the gAI summary said, especially in areas where I don't have much expertise, and I would ask for specific references that support its overall claims. As long as the user is taking those steps, this feels okay. And, crucially, disclosing this use of gAI would not give me pause!

Reacting to (B): In this version, I start to worry. Taking the gAI summary as true is risky because hallucination is a permanent feature of these types of tools (1). Those hallucinations might not be a problem, but I think I would be shirking my job duties if I don't check the gAI summary against the report itself. Partly, I am using my own discomfort at disclosing this process as a guidepost. The feeling that I might not want to be transparent about using gAI this way is, I think, a (crude) signal that it is against my values.

Reacting to (C): This makes me deeply uncomfortable. I feel like I'm not really involved at all. The results would have been the same if you had sent the report to the gAI tool directly and asked it to summarize the report and send an email while pretending to be me. If I disclosed this kind of use to my colleagues, I would absolutely expect them to wonder what work the university is paying me to do. Disclosure would feel bad, and that is an important signal.

Sidenote: Transparency isn't perfect. "The transparency dilemma: How AI disclosure erodes trust" outlines some important results: Disclosing AI use is better than being found out after the fact, but it isn't flawless. Even fully transparent use can still erode trust, even among those with positive attitudes toward technology and confidence in gAI accuracy (2).

Making gAI transparency part of our university values

After a brief discussion on these topics, I asked the folks in the Community Collab Summit session the following question:

What would be the impact at the university if
disclosure of gAI use were standard practice?

    A. It would be very beneficial
    B. It would be somewhat beneficial
    C. It would be somewhat detrimental
    D. It would be very detrimental

Results: A – 50%, B – 36%, C – 14%, D – 0%. Crudely: 86% beneficial. Overall, the 22 folks who responded that day gave a very strong signal that making disclosure of gAI use standard practice would be beneficial at the university.
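As a quick sanity check on those percentages (the raw counts below are my own back-calculation from the reported figures, not data from the session): with 22 respondents, whole-person counts of 11, 8, 3, and 0 reproduce the results exactly when rounded to the nearest percent.

```python
# Back-calculate plausible raw counts from the reported poll percentages.
respondents = 22
counts = {"A": 11, "B": 8, "C": 3, "D": 0}  # assumed raw counts

assert sum(counts.values()) == respondents

# Round each option's share to the nearest whole percent.
percentages = {k: round(100 * v / respondents) for k, v in counts.items()}
print(percentages)  # {'A': 50, 'B': 36, 'C': 14, 'D': 0}

# "86% beneficial" = options A and B combined.
beneficial = round(100 * (counts["A"] + counts["B"]) / respondents)
print(beneficial)  # 86
```

Nineteen of twenty-two respondents landing on the beneficial side is the "very strong signal" referenced above.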

As you can probably tell, I agree wholeheartedly. I think a simple disclosure statement can take a given scenario out of a muddy, icky grey zone and make it feel relational and transparent.

 

Where the rubber meets the road

Making gAI disclosure standard practice at the university will probably evoke a wide range of reactions. In my opinion, the discomfort and friction that might arise are important. They are a signal that we are not all operating with the same values around these very new, very powerful tools.

One interesting case to consider involves the instructor and student roles in a course. There are cases in which faculty used a gAI assistant to write an email accusing students of using gAI to write their essays, and famous cases in which students demanded a refund after realizing the course content itself was generated by AI (3). I don't think these uses are equivalent, but I do think disclosure and transparency would have made a big, positive impact in both.

I have no idea how our faculty, staff and students will view gAI in 10 years' time. For now, I hope we are forced to have difficult, nuanced conversations that lead to clear guidelines and practices.

 

Can you do it?

And so I will end with an actual Call to Action! Start including a generative AI disclosure statement as part of your email signature, your websites, your course content, your internal reports, etc. This isn't just for those who use gAI; it is for everyone. If this became common practice, it would kick off important conversations.

Your disclosure doesn't have to be detailed or lengthy. I've started including the following in my email signature:

AI Disclosure Commitment: When I use generative AI in my work, I will always include a brief disclosure statement. Aside from spellcheck, this email was composed without the use of generative AI.

When I do use gAI, I modify the last phrase to disclose how I used it. You can also find similar disclosures at the end of all of my CTLD Connections pieces, such as this one about Three Types of Content and What They Mean for Generative AI (scroll to the bottom).

 


Notes

    1. Quote:
      These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. "Despite our best efforts, they will always hallucinate," said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. "That will never go away."
    2. Burke, M., Creary, S. J., & Mulder, J. (2025). The transparency dilemma: How AI disclosure erodes trust. Journal of Business Ethics, 190.
    3. The Professors Are Using ChatGPT, and Some Students Aren't Happy About It

Generative AI disclosure: After writing this piece I used generative AI to write a first draft of the short teaser blurb that went out by email. (The featured image, on the other hand, was made by combining icons, each under a Creative Commons license.) Want to know more? Send me an email and we can chat!