Exceptional EA

Use GenAI if it’s approved within your workplace, yet also use your noggin

February 1, 2024

You likely have a lot on your plate. You of course need to be on top of meetings and minutes, calendars and perhaps projects. You also need to manage timelines, expectations, relationships and yourself.

On top of all this, and other aspects of your role, it’s important to be aware of how to capitalize on generative AI (GenAI) in addition to other forms of AI. If you’re using ChatGPT or similar resources, including image generators such as Bing’s, you want to think about reputation management.

A public reputational hit

We’ve learned a lawyer is alleged to have used ChatGPT in preparing for a Vancouver court case, and the case law she presented during a court hearing was not real; GenAI produced non-existent case law.

We’re given to understand the lawyer, whose standing within her profession is yet to be determined as I write this, told the court she was “not aware the artificial intelligence chatbot was unreliable, and she did not check to see if the cases actually existed”. I believe I was the first person to write about ChatGPT, Bard and GenAI for assistants, as far back as my December 2022 newsletters. Even as I encouraged assistants to explore and assess use of ChatGPT in the career, I cautioned on this website more than a year ago that GenAI conveys both valid and inaccurate statements with equal confidence. You may also have read my ChatGPT cover article for the March 2023 issue of Executive Support Magazine.

Head to my LinkedIn post to hear a cautionary tale

Click here to see my video and cautionary tale on LinkedIn and be prepared to do your homework. Use that fine mind you’ve been given.

Verify. Yes, we should capitalize on technology as appropriate within our respective work environments. We should also verify what we read. Proofreading entails checking more than grammar, spelling, context and presentation; we also need to ensure our deliverables are accurate. In the case of GenAI, the onus is on you to fact-check and verify that the information is accurate.

Look closely at images you generate. When using GenAI image generators, you’ll also want to look closely at the end result before inserting it in a document or presentation. I use a combination of my own photography and other resources in designing my presentations, and some of the GenAI images resulting from my instructions have been laughable. Others demonstrate bias, include nonsensical language or misspelled words, or have been inappropriate in that they depicted parts of the anatomy not typically displayed in public.

In one instance, I’d instructed the software to depict two people conversing on a tennis court, with only one of the players holding a racquet. The resulting image did depict two fairly realistic-looking people in front of a tennis net, in the broader context of a tennis court within a leafy green park. The two people may have represented 15% of the overall image, and it was only when I went to download the image that I noticed the appendage.

If I had vision challenges or simply hadn’t clued in, and had embedded that image in my presentation, what reputational impact do you think it would have had when I spoke in Rotterdam last December? The image that filled my laptop screen would have been magnified many times over on the screen in our lovely nhow Rotterdam hotel meeting room.

Watch for bias. How do assistants support inclusion and belonging in the face of AI biases, whether you’re preparing a document or creating images for a PowerPoint presentation? Using AI of any kind should not imply blind trust or faith in what’s produced. Be thoughtful in how you phrase questions or instructions, and examine the output prior to using it.

Use GenAI wisely, and refrain from blind trust

Reputational impacts for you, your colleague(s) and your organisation

There are multiple ways in which we can take advantage of GenAI. We do, however, want to avoid blind trust in its output. As an assistant, how would you feel learning you’d sent your colleague into a meeting with an image of a somewhat exposed individual in their slide deck? How would your leadership team or board members respond to learning that materials they received included inaccurate or biased statements? What kind of fallout might there be if some incorrect or misleading GenAI-produced content made its way into a media release, a quarterly or annual report, or other significant material intended for internal or external audiences?

Some workplaces prohibit use of GenAI

You may be in a workplace that prohibits use of GenAI; if so, you want to ensure your practices align with your employer’s expectations.

If you’ve attended a risk management session with me, you’ll know senior executives and boards will ideally have a healthy focus on assessing the various risks that can come with opportunities. GenAI certainly represents an opportunity, albeit one that can come with bias, compliance, cyber, intellectual property (IP), liability, reputational and third-party risks. In determining policy or practice, the people atop your org chart will also ideally have assessed the merits and risks associated with incorporating GenAI in your workplace.

You can tap into your own resources to ensure your currency

Even if there’s an expectation you will not use GenAI within your role, it can be beneficial to tap into this resource on your personal time, using your own hardware and internet service provider. It’s important to maintain skills currency, whether or not you envision leaving your current role or anticipate policy changes within your work environment. If you’ve not yet tried GenAI, you may want to begin with a free account before determining whether there’s value in upgrading to a paid subscription plan.

Canada’s Federal Court has banned its judges from using AI in decisions without first engaging in public consultation

The recent Vancouver court case is not specific to an assistant’s career, yet it highlights how we’re in somewhat of a “wild west” stage as individuals, employers and institutions determine whether and how to rely on GenAI. Nor is the Vancouver case an isolated one. With court cases in the US and other jurisdictions having been impacted by use of GenAI and machine-learning resources, our country’s Federal Court has committed that its judges will refrain from using AI in decision making prior to public consultation.

Requirement for transparency, in the form of a prominent declaration of AI-generated content

In December 2023, Canada’s Federal Court published a notice including a requirement for a “Declaration for AI-Generated Content”. The Declaration must be provided in the first paragraph of any litigation-related document that includes AI-generated content and is prepared for submission to the Court. The Court has acknowledged AI’s “potential for considerable benefits” as well as risks “to the independence of judges and public confidence in the justice system”. It identified additional risks all of us should consider, including AI “hallucinations”, bias and underlying algorithms. The Court provided the following as an example of its expectations for such Declarations.

Guidance from Canada’s Federal Court:

“DECLARATION

Artificial intelligence (AI) was used to generate content in this document.”

Keeping a “human in the loop”

In its additional publication, Interim Principles and Guidelines on the Use of Artificial Intelligence, the Federal Court articulated the principle of having a “Human in the Loop”. It stipulated “… the need to verify the results of any AI-generated outputs that they may be inclined to use in their work.” I used the term “verify” in my LinkedIn video before reading any of the Federal Court’s documentation on this matter.

Principles we should consider

While a small percentage of readers will be engaged in court-related matters, it’s worth considering the Federal Court’s principles. We can apply them in our own use of GenAI. These principles include accountability, accuracy, cybersecurity, the aforementioned “human in the loop”, non-discrimination, respect of fundamental rights, and transparency.

I wrote this myself

When I sent my March 2023 cover article to Executive Support Magazine Editor Kathleen Drum and Publisher Lucy Brazier, I began with a somewhat cheeky statement letting people know a human – yours truly – wrote the article. As I close this piece, I’m glad to say I wrote this myself and conducted my own independent research. I did, however, use GenAI to create three of the above images.

Do you use ChatGPT?

I’ve been writing about it since 2022, and you can click here for one of a series of articles I’ve written encouraging readers to explore and play with ChatGPT to assess how it can impact your career.
