How long before using AI feels normal, and we stop announcing we are doing it?

#💬 Quick Question

How long do you think it will be before we get comfortable as a society using chatbots and other AI to assist us, especially in written communication, without the need to lead with a disclaimer? I still very much feel it's appropriate to make it explicit where I've used ChatGPT, and which contributions are human.

But if I start using it regularly for a few weeks, will I slip into the habit of using it without announcing it?

What if I'm using it for months, or a year? In five years' time, will it just be assumed that any text could be generative, and at best checked by a human afterwards?


I’m not sure when, but I’m sure it will happen eventually, just as it has throughout history with the adoption of other technological advancements. There have always been people who resisted change, but from what I’ve observed, those who used new tools to make their work easier were at an advantage.


Individually, perhaps. But how comfortable YOU feel is different from how big businesses are going to exploit it, and it would be naive to think AI is not already being used for many applications without us being told.

British Telecom announced it was cutting 55,000 jobs and replacing them with AI by 2030… a news story that surprised absolutely no one… so isn’t that a sign it’s already accepted and we’re comfortable as a society already?


These things go in cycles: we outsource, we insource, we do the hokey-cokey and turn around. I’ve already seen adverts from companies selling human support as an explicit benefit.

But yes, I appreciate I’m maybe already way too late in asking this question. I was musing that I’m not comfortable using ChatGPT myself without declaring it, but I’m already thinking that might not last long.


For me, I’m kind of already used to it.
I set myself the task of using ChatGPT instead of Google for a week, and now I’m used to it.
Obviously not everything ChatGPT says is correct, and then you Google it, but it’s such a big inspiration for the small things: writing emails, documentation, etc.

I would say it has been that way for a few years already. Now, with the release of ChatGPT, it’s more of a ‘standard’ here in Belgium.
I recently DM’d someone on LinkedIn about their post and they said “but this is what ChatGPT wrote” (the answer was fine, btw).

I don’t think you need to announce it. There is no need to say ‘this was written by ChatGPT and validated by me afterwards’.
As long as the content is legit and good, I don’t care who wrote it :stuck_out_tongue:

I guess this is what we call innovation. I recently also gave a presentation on how I use ChatGPT in software testing & for other use cases and I had so much positive feedback on it. People started using it more for their job and personal things.

For me, it’s already normal, and I don’t think we need to announce that we didn’t write it ourselves. You still wrote the prompt.

An example;

If you go to a restaurant with two or three stars, do you think the chef who created the menu is actually in the kitchen? Not always. He creates the menu, comes up with new dishes, and decorates the plates. That doesn’t mean it isn’t “his” restaurant or “his” menu.

Same goes for AI prompts imho.


If you use ChatGPT to overcome a problem, and you then understand how ChatGPT did it and could solve the problem again without ChatGPT - is that not the same as say, looking it up on Google / Stack Overflow?

Would you declare to your peers that you needed to look up how to solve the problem? Yes, and IMO using ChatGPT is the same as researching a problem, no different from Googling. You just get a much more focused answer back; sometimes it’ll hit a home run and you’ll get the right answer first time.

Regardless of any ethics around announcing how you came to a solution to a problem, anything you get out of ChatGPT should be backed up by your own research anyway. So the chances are that even if you used ChatGPT to solve a problem, you used other research methods too.


This ^
So true! People are still scared of AI, just like they are of security. They don’t understand it, so they badmouth it.


I think it is important to distinguish two basic approaches:

  1. Using an AI to write a text for you, e.g. “write a nice letter to request X”.
    • I guess disclaimers will be used less here. It is more of an advanced autocorrect / autocomplete.
      • Maybe some new laws will enforce disclaimers here.
  2. Using an AI to get information, summaries, or inspiration, and send them to someone, e.g. “what are good books about testing?”, “what are ideas to test Y?”
    • Here I see it like quotes or information from another source. I hope, and guess, that most people will note the sources.

To reference the meme about how you guys get paid (ask @mirza if you do not know it): I for one use it and do not tell people that I am using it. Does that mean I tell people I did a task when it was actually chatBLABLA that did it? No. I ask it stuff, or ask it to do stuff, when I do not want to write on my own or when I want to try things out, but in the end I read and correct what I get.

As @kristof also wrote, it knows things, but the answers are not always usable. I asked it for fun if Cypress can do iOS testing, and the answer was: yes… just install npm cypress-ios . :smiley: