🤖 Day 2: Read an introductory article on AI in testing and share it

It’s Day 2 of 30 Days of AI in Testing! Yesterday, we introduced ourselves and shared our AI aspirations. Today, we’re diving into the basics of AI in software testing by sharing useful articles with each other.

Day 2 Task:

For today’s task, you’re challenged to find, read, and share your key takeaways from an introductory article on AI in software testing. This could cover the basics of AI, its applications in testing, or even specific techniques like machine learning for test automation.

Task Steps:

  1. Look for an article that introduces AI in software testing. It could be a guide, a blog post, or a case study—anything that you find interesting and informative.
  2. Summarise the main takeaways from the article. What are the essential concepts, tools, or methodologies discussed?
  3. Consider how the insights from the article apply to your testing context. Do you see potential uses for AI in your projects? What are the challenges or opportunities?
  4. Share your findings by replying to this topic with a summary of your chosen article and your personal reflections. Link to the resource (if applicable).
  5. Bonus step! Read through the contributions from others. Feel free to ask questions, provide feedback, or express your appreciation for insightful findings with a :heart:

Why Take Part

  • Expand Your Understanding: Getting to grips with the basics of AI in testing is crucial for integrating these technologies into our work effectively.
  • Inspire and Be Inspired: Sharing and discussing articles introduces us to a variety of perspectives and applications we might not have considered.
  • Save Time: Benefit from the collective research of the community to discover valuable resources and insights more efficiently.
  • Build Your Network: Engaging with others’ posts helps strengthen connections within our community, fostering a supportive learning environment.

EDIT: We encourage you to go deep and find posts from independent authors. There are plenty of companies writing articles on AI. Let’s amplify individual members who have written an introductory article.

:mortar_board: Support your learning and the community. Go Pro!


AI in Software Testing: Revolutionizing the Testing Landscape

Artificial intelligence (AI) is transforming software testing by introducing innovative approaches and solutions. AI techniques, including machine learning and natural language processing, enable automated test case generation, intelligent bug detection, performance optimization, and test data analysis.
Benefits of AI in Software Testing:

  1. Enhanced Test Coverage: AI automates test case generation, ensuring comprehensive coverage.
  2. Improved Accuracy: AI algorithms identify anomalies and predict defects, leading to increased test accuracy.
  3. Reduced Time and Cost: AI optimizes regression testing and reduces manual effort, saving time and resources.
  4. Faster Software Delivery: AI integrates with CI/CD pipelines for early defect detection and rapid feedback, speeding up software releases.
  5. Enhanced User Experience: AI simulates user interactions and identifies usability issues, improving the user experience.

By leveraging AI in software testing, organizations can improve the quality, reliability, and efficiency of their software products.

Link: AI In Software Testing — How can AI be used in software testing? | by Testsigma Inc. | Medium


AI Testing: The Future of Software Testing
This article provides a comprehensive overview of AI in Software Testing. The key takeaways are:
Benefits: AI can automate test creation, generate test data, maintain tests efficiently, enhance visual testing & identify patterns/anomalies in large datasets that might be missed by humans.
Essential concepts: The article explains how AI leverages machine learning algorithms to learn from data, make decisions based on patterns & improve performance over time. It also touches upon AI-powered tools like test case generation and execution platforms.
Applying AI to my testing context:
By automating repetitive tasks, improving test coverage, and learning from past tests, AI could lead to a more efficient testing process.

Overall, AI :automator: presents exciting opportunities to revolutionize software testing by making it faster, more efficient, and more effective.

Article: https://katalon.com/resources-center/blog/ai-testing


Hello @simon_tomes :wave:

Here’s the article: https://www.browserstack.com/low-code-automation/features/what-is-ai-testing

And below are the key :key: takeaways -

Introduction to AI Testing:
The article defines AI testing as the use of artificial intelligence and machine learning algorithms to automate various aspects of the software testing process.

Applications of AI in Testing: It discusses how AI can be used for tasks such as test case generation, test execution, test data management, defect prediction, and log analysis.

Benefits of AI Testing: The article highlights the benefits of AI testing, including improved test coverage, faster test execution, reduced manual effort, and enhanced accuracy in defect detection.

Challenges and Considerations: It also addresses challenges such as the need for high-quality training data, model interpretability, and potential biases in AI algorithms. Additionally, it emphasizes the importance of human intervention and validation in AI testing.

Future Outlook: The article concludes by discussing the future of AI testing and the potential impact of advancements in AI technologies on testing practices and methodologies.

Personal Reflections:

• The article provides a comprehensive overview of AI testing and its potential applications in software testing.

• I see several potential uses for AI in my testing projects, particularly in automating repetitive tasks, optimizing test coverage, and improving defect detection.

• However, I also recognize the challenges associated with AI testing, such as data quality, model interpretability, and the need for human validation.

• Overall, the article has deepened my understanding of AI testing and has sparked my interest in exploring AI-driven testing solutions further to enhance my testing practices.


Getting into the spirit of things, I got ChatGPT to generate me a list of articles. I picked this one at random.

Revolutionizing Software Testing: The Power of AI in Action

This article was a basic overview of the various types of AI and how they could be leveraged at different stages of the STLC. AI has applications through the entire process from requirements right through to test closure.
Whilst there are many benefits of AI, particularly in respect to productivity, there are also drawbacks including the high initial cost and ongoing maintenance challenges.


A relatively short article giving a broad introduction to the subject; nevertheless, I found it very thought-provoking. I was particularly surprised by the number of different areas that AI can be applied to.

I like how the article positions AI as an aid to manual and automated testers, not something that replaces them.

Having read a broad overview, I now have many questions that I’m hoping 30 Days of AI in Testing will help me answer. These include:

  • Where do I start? There are so many possibilities, how do I work out which applications will deliver the most value quickly?

  • Technically, which tools will work in our particular tech stack?

  • What process changes will we need to ensure our AI implementation works for us?

  • How much of a learning curve is there for each of my team members?

  • How can we mitigate the drawbacks?

  • Is AI really as good as it is being portrayed or is it largely hype?

  • Given the high initial cost of implementing AI, how much of a risk is it?


Hello @simon_tomes and fellow testers!
A great quick read on how AI can, and probably will, be involved in day-to-day testing. I took some notes:

AI can analyze test results and determine the root cause of software failures, providing insights into how to improve the quality of the software being tested.
I initially thought this was quite a bold statement; on reflection, all the AI will be doing is giving the tester and devs options to think about, and it’s up to the tester/dev to act on them.

AI in software testing refers to the creation of intelligent algorithms that can test software applications without requiring human intervention.
Not sure I can entirely agree with “without requiring human intervention”: algorithms need to be maintained, so there is still going to be human intervention somewhere.

Anomaly Detection: AI algorithms can analyse test results and identify unusual patterns or unexpected behaviors that might indicate defects or vulnerabilities in the software.
When doing exploratory testing, the biggest challenge is to record and note down everything you are doing. I would love an AI tool that sits in the background and can report on everything that has been done.
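To make the anomaly-detection idea concrete, here is a minimal sketch of flagging unusual test run times. It uses plain statistics rather than a trained model, and the durations are invented for the example:

```python
import statistics

# Durations (in seconds) of recent runs of one test; the numbers are invented.
durations = [1.2, 1.1, 1.3, 1.2, 1.1, 9.8, 1.2]

mean = statistics.mean(durations)
stdev = statistics.stdev(durations)

# Flag any run more than two standard deviations from the mean.
anomalies = [d for d in durations if abs(d - mean) > 2 * stdev]
print(anomalies)  # the 9.8s run stands out
```

Real AI-assisted tools would learn what “normal” looks like from history rather than using a fixed threshold, but the principle is the same: surface the runs a human should look at.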

Predictive Analytics: AI can predict which parts of the software are more likely to have defects based on historical data. This helps testers allocate resources more effectively and concentrate testing efforts where they are most needed.

Good testers know that more complex parts of the application will yield the most issues. AI tools will help in this process.

Test Data Generation: AI can generate diverse and realistic test data that covers different scenarios, ensuring thorough testing of various conditions.

One of my big gripes in software testing is having to find data in test environments. It is one area where I can see a massive benefit of AI: being able to generate good data and bad data to throw at the system.
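As a rough illustration of the kind of output such a tool might produce, here is a sketch that mixes valid records with deliberately invalid ones. The field names and validity rules are invented for the example:

```python
import random
import string

def good_record():
    """A record that should pass validation."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {"email": f"{name}@example.com", "age": random.randint(18, 90)}

def bad_record():
    """A deliberately broken record to throw at the system."""
    return random.choice([
        {"email": "not-an-email", "age": 30},   # malformed email
        {"email": "a@example.com", "age": -1},  # impossible age
        {"email": "", "age": 10**6},            # empty and out-of-range
    ])

# Mix good and bad data so both the happy path and validation get exercised.
dataset = [good_record() for _ in range(5)] + [bad_record() for _ in range(5)]
```

An AI-driven generator would go further by inferring the schema and realistic value distributions from the system itself, rather than having them hand-coded like this.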

Automated Bug Triaging: AI can analyze incoming bug reports, categorize them, and assign them to the appropriate developer or team for resolution.

Ahh, the Defect Meetings: no more arguing about what is a bug and what is not a bug and when it’s going to be fixed…

Overall, everything in this article is what I would like to see implemented. I am not sure a lot of “mature” organisations are quite ready for everything mentioned, but that is another debate to have!



I found this article related to the benefits of AI in Software Testing.

In conclusion, the integration of AI into software testing emerges as a game-changer, ushering in unprecedented efficiency, accuracy, and agility to the testing landscape. The article illuminates how AI accelerates testing timelines, ensuring swift and precise identification of flaws, while simultaneously freeing up human resources for more strategic tasks. The adaptability of AI bots stands out, seamlessly handling expanding code volumes and adeptly discerning between new features and potential pitfalls resulting from changes.

Furthermore, AI’s ability to comprehend client needs, streamline test automation, and enhance regression testing showcases its versatility and indispensability in the software testing domain. The cost-effectiveness of AI-driven testing systems, eliminating repetitive manual tasks without incurring additional expenses, adds another layer of appeal. Lastly, the comprehensive coverage achieved by AI in scrutinizing diverse aspects of software functionality, from memory to UI elements, underscores its pivotal role in elevating overall product quality.

As the article foresees the continued evolution of AI in software testing, it paints a compelling picture of a future where agile, precise, and AI-empowered testing methodologies redefine the very essence of quality assurance and software excellence. In an era of rapid technological advancement, the synergy between AI and traditional testing approaches promises to be the cornerstone for staying competitive and ensuring the seamless delivery of cutting-edge software solutions.


I think there are two main paths to take; one is to supplement knowledge and one is to supplant knowledge. I have read other articles and posts where Co-Pilot and ChatGPT can actually hamper the development of new devs because they are using code they don’t understand. There is a discipline here in understanding the outputs given as well as resisting the urge to continue asking a LLM questions until you get the answer you want.

Better automation

As mentioned above, QA’s main job is to make sure new code doesn’t break functional code. More features mean more code to test, which can overwhelm QA engineers.

  • AI bots can adapt to code changes.
  • They adapt and identify new functions.
  • AI bots can be programmed to classify code changes as new features or bugs.
  • Built platforms can improve automated testing.
  • Change detection is improving in visual testing AI.

AI’s Transformative Role in Software Testing


Hello Folks!

Here are the answers to my task steps for Day 2:

  1. Take a look at an article → I checked out this article AI In Software Testing - The QA Lead

  2. Main Takeaways from the article:

  • AI is an invaluable tool. Like a quantum leap.
  • It has its use cases in almost all aspects of testing / testing workflows.
  • Transition to actually enabling AI in testing would need teams to upskill and design strategy considering AI.
  3. Potential, Challenges & Opportunities:
  • Potential: In Test Ideas, Test Data, Test Healing, Scripting, Result Analysis
  • Challenges: Most existing data on testing is not reliable. The results would also be unreliable in most cases and need to be re-verified repeatedly.
  4. Here is my summary / key takeaway:

Btw, this is a video blog that I just made on today’s activity: |Rahul’s Testing Titbits| Day 2 of 30 Days of AI in Testing - What is AI in Testing? - YouTube

Do share your feedback on this too :slight_smile:

Thanks! Looking forward to feedback and thoughts from fellow members.


This is wonderful @parwalrahul! Great representation.

  1. Take a look at an article → I checked out this article AppAgent: Multimodal Agents as Smartphone Users

  2. Main Takeaways from the article

  • This paper presents a novel Large Language Model (LLM)-based multimodal agent framework designed to manipulate smartphone applications. The framework extends its applicability to a wide range of applications by enabling the agent to mimic human interaction behaviors such as taps and swipes through a simplified action space that does not require system back-end access. The core functionality of the agent is its innovative learning approach, which allows it to learn how to navigate and use new applications through autonomous exploration or by observing human demonstrations. This process generates a knowledge base that the agent refers to when performing complex tasks in different applications.

  • The paper also discusses work related to large-scale language models, specifically GPT-4 with integrated visual capabilities, which allows the model to process and interpret visual information. In addition, the performance of the agent was tested on 50 tasks across 10 different applications, including social media, email, maps, shopping, and complex image editing tools. The results confirm the agent’s proficiency in handling a wide range of advanced tasks.

  • In the methodology section, the rationale behind this multimodal agent framework is described in detail, including a description of the experimental environment and action space, as well as the process of the exploration phase and deployment phase. In the exploration phase, the agent learns the functions and features of the smartphone application through trial and error. In the deployment phase, the agent performs advanced tasks based on its accumulated experience.

  • The paper concludes with a discussion of the agent’s limitations, i.e., the lack of support for advanced controls such as multi-touch and irregular gestures, which may limit the applicability of the agent in certain challenging scenarios. Nonetheless, the authors see this as a direction for future research and development.

  3. Potential: New UI automation test scripting approaches and concepts for mobile; self-exploration and imitation of manual steps; multi-model support, so you can select and switch models according to the actual situation of your app.

  4. Challenges: You need the agent to be familiar with your mobile app, and you also need to feed the agent enough scenarios.

  5. Here are my personal reflections:

Project link: https://github.com/mnotgod96/AppAgent

I think it can be used to do exploratory testing of mobile apps: by giving it the existing test cases as a knowledge base, AppAgent could learn and explore to expand the test scenarios and make them more realistic and effective.


This is not exactly following the instructions here, because this is not an article but a video course, and it is not about AI in software testing but AI in general. But here goes:

Course link: Introduction to Artificial Intelligence on LinkedIn Learning

  • Strong vs weak AI. Strong AI displays all the behaviors of a human - a few years ago we were very far from creating anything like this, and probably still are. Weak AI might display human behavior - and often be better than a human - in some narrowly defined task, like playing chess, deciding if a loan should be granted, or buying and selling stocks.
  • The idea of machine learning was created back in the 1950s. In general, it is about creating algorithms that can learn and improve on their own. Many machine learning systems are implemented through artificial neural networks. The idea is that you have input (which you have full control over) and output (which you can observe and judge), but between the two there are layers of neurons. Those layers are effectively a black box for us.
  • Machines need much more data than humans to learn. Many of the problems of AI in the past boiled down to insufficient processing power and insufficient amount of data.
  • There are two main ways to teach a machine - you can give it labeled data (i.e. pictures, each with a note “this is a cat” or “this is a dog”) and let it figure out what labeled instances have in common; or you can give it raw data, tell it to decide if this is a cat or dog, and then tell it if it was right or wrong. Both ways can be used at the same time.
  • Common machine learning algorithms are K-nearest neighbor, K-means clustering, regression, and naive Bayes. All of them are statistical techniques. The course does not answer the question of whether this is what AIs are actually doing under the hood, or just a useful way of thinking about AI. All these techniques have been around for decades, and some pre-date computers.
  • K-nearest neighbor is multiclass classification. Example: let’s say you have a hundred dogs of different breeds. You can plot them on a chart by properties like body weight, hair length etc. When a new dog arrives, you don’t know the breed, but you can measure its body weight etc. Then K-nearest neighbor places the dog on the same chart and looks at its K nearest neighbors. If K is 5, and 4 of the nearest neighbors of the new dog are dachshunds, then you can assume that this new dog is also a dachshund.
  • K-means clustering is similar, except that you specify that you want to classify all dogs into K groups, and let the machine figure out which properties best create that grouping. The groups might not make much sense to a human - one group could be brown dogs (of any size), and another could be heavy dogs (of any color).
  • Regression looks at how closely variables track each other, i.e. when a change in one variable goes along with a change in another. This is one of the oldest and most robust statistical techniques. Not covered in the course, but common mistakes are: correlation does not mean causation (just because two things change together doesn’t mean one is caused by the other), and beware of autocorrelation (when two variables really measure the same thing, often introduced after data is processed in various ways; there’s an article explaining that the Dunning-Kruger effect does not exist and is just autocorrelation).
  • Naive Bayes is multiclass classification where each variable is considered independently and provides some probability to final decision. The course skims over details to greater extent than when discussing other techniques.
  • When selecting and training the model, beware of underfitting and overfitting. Both are examples where model worked well enough in the lab, but failed in real-world application. Underfitting is when model ignored some of important variables. Overfitting is when model assigned importance to variables that are not really important in the decision.
  • Two performance metrics given are accuracy and reliability. Accuracy is how close the model comes in its decisions. Reliability is how consistent the answers are with each other (natural variance). Not linked in the course, but this website provides a visual explanation. In practice there are many more performance metrics you can use, and there are usually tradeoffs between them - as you improve one, some other goes down.
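The K-nearest-neighbor example above can be sketched in a few lines. The dog measurements here are invented purely for illustration:

```python
from collections import Counter
import math

# Labeled training data: (body_weight_kg, hair_length_cm) -> breed.
dogs = [
    ((9.0, 6.0), "dachshund"),
    ((8.5, 5.5), "dachshund"),
    ((9.5, 6.5), "dachshund"),
    ((30.0, 4.0), "labrador"),
    ((28.0, 3.5), "labrador"),
]

def classify(features, k=3):
    """Predict a breed by majority vote among the k nearest labeled dogs."""
    by_distance = sorted(dogs, key=lambda d: math.dist(features, d[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

print(classify((9.2, 6.1)))  # nearest neighbors are dachshunds -> "dachshund"
```

Note that in a real application the features would need scaling so that, say, weight in kilograms doesn’t dominate hair length in centimeters when computing distances.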

Hello, World! :globe_with_meridians::wave:

Title: “AI Mobile Automation Testing: Fact or Fiction? :robot::iphone: Explore Now! :rocket::mag:”

Author: Joe Colantonio

In Joe Colantonio’s captivating article, the fusion of AI and mobile automation testing takes center stage, promising a paradigm shift in the tech landscape. :star2::rocket: As technology evolves at breakneck speed, so do the challenges in ensuring flawless mobile app experiences across diverse platforms and devices. But fear not! Enter AI, the ultimate game-changer in the world of testing. :mechanical_arm::bulb:

Colantonio masterfully illustrates how AI-powered tools navigate the labyrinth of mobile testing complexities with unparalleled precision and efficiency. From handling myriad device configurations to optimizing user experiences, AI emerges as the undisputed hero of the testing realm. :muscle::bar_chart:

Through compelling real-world examples, Colantonio showcases how leading organizations harness the transformative power of AI to accelerate their testing processes, streamline workflows, and elevate product quality. :globe_with_meridians::chart_with_upwards_trend: But the journey doesn’t end here—it’s a glimpse into the future of mobile testing, where AI continues to shape a landscape of endless possibilities. :rocket::crystal_ball:

Join Colantonio on this exhilarating expedition into the world of AI-driven mobile testing. Read the full article here and embark on your journey to testing excellence! :newspaper::link::fire:

Verdict: :star2::iphone: After diving into the captivating world of AI in mobile automation testing, I’m thoroughly convinced of its transformative potential. The article sheds light on how AI-powered tools are revolutionizing testing practices, paving the way for more efficient and effective app development. :bulb::rocket: The real-world examples provided underscore the tangible benefits that AI brings to the table, from reducing testing time to enhancing user experiences. :briefcase::bar_chart: Overall, this article has left me inspired and eager to embrace AI as a cornerstone of my testing strategy. It’s clear that AI isn’t just a trend—it’s a game-changer that will shape the future of mobile testing for years to come. :clap::crystal_ball:

Connect with Manoj Kumar B :star2::man_office_worker:


Here is the article: Can AI make software testing a breeze?

The main takeaways from the article:

  1. AI as a Helper, Not a Replacement: While there was initial speculation about AI potentially replacing human roles in IT, the current trend suggests that AI serves more as an assistant rather than a substitute for human testers. It emphasizes that AI is still in its early stages and hasn’t reached its full potential yet.

  2. Current State of AI Testing: Despite the promise of AI in testing, there hasn’t been a significant breakthrough. AI-created unit tests may not always produce accurate results, and training machine learning models requires substantial time and resources, which might not be feasible for all businesses.

  3. Balanced Approach to Improving Testing with AI: The article advocates for a balanced approach to incorporating AI into testing practices. It suggests understanding the fundamentals of testing before utilizing AI and warns against hasty adoption without proper planning, which could lead to ineffective practices. AI can assist in automating certain aspects of testing, such as generating test cases based on database structures or offering solutions to simple testing problems.

  4. AI as a Learning Tool for New Testers: AI can be beneficial for beginners in the testing field by providing access to a wide range of information and methods. However, it’s noted that AI may not always direct learners to additional resources for deeper study, which could result in a superficial understanding. Nevertheless, AI encourages experimentation and introduces learners to different testing methodologies.

  5. Navigating the Future with AI: The article concludes by emphasizing the importance of viewing AI as a complement to human expertise rather than a replacement. It suggests fostering a collaborative relationship between humans and AI to leverage the strengths of both, ultimately leading to more efficient and innovative solutions in software testing.

Overall, the main takeaways from the article include the importance of understanding AI’s current capabilities and limitations, adopting a balanced approach to its integration into testing practices, and recognizing its potential as a learning tool for both experienced testers and newcomers in the field.


Hi fellow testers, I’ve chosen Learn How AI is Transforming Software Testing – QA Revolution at random.

Summarise the main takeaways from the article

This article states that AI will boost the efficiency and speed of testing and will help with the running of test automation. It then rather wildly states that AI will remove the need for assumptions; I disagree with this, as it seems to imply the human element is entirely removed from testing, which I think will always be essential at some part of the process. It then goes on to say that AI will help with visual validation and that it will help find more bugs and find them quicker. The article then lists a couple of new test-related jobs that it thinks will be created, which are quite interesting.

Consider how the insights from the article apply to your testing context

It’s a bit too vague and hand-wavy to really be helpful to my context, as it’s all forward-looking guesswork about how AI might help. In its defence, the article was written in 2019, so it predates the AI hype of the last few years.

Looking at the potential benefits it lists visual validation would be quite helpful as long as it didn’t constantly throw false positives and the ultimate goal of it finding more bugs and finding them quickly would be lovely.


You’ve provided insightful takeaways from the article on AI in software testing, highlighting its potential, challenges, and opportunities. Your summary demonstrates a clear understanding of the content and the implications for testing workflows. Sharing a video blog adds a dynamic element to your presentation, enhancing the engagement and depth of your exploration. Overall, well done!:wave:


This is bold claim that I hope comes true :joy:


I’d be interested in learning more about the biases that these models have.


Based on the provided blog post from Testim.io titled “Revolutionizing Software Testing: The Power of AI in Action”, here are the key takeaways and reflections on how AI can be integrated into software testing:

Key Takeaways:

  1. Definition and Scope of AI in Testing: AI in software testing refers to the simulation of human intelligence in machines to perform tasks that require cognitive functions such as analyzing data, making decisions, recognizing patterns, and learning from new information. This encompasses a wide range of applications across various industries, including software testing.

  2. Applications of AI in Software Testing: The article outlines several phases of the software testing lifecycle (STLC) where AI can be applied, including requirement analysis, test planning, test case creation, test environment setup, test case execution, and test closure. AI technologies, particularly Natural Language Processing (NLP) and machine learning, can automate and enhance these phases by analyzing requirements, identifying high-risk areas, generating test cases, optimizing test execution, and providing insightful test summaries.

  3. Benefits of AI in Testing: AI-powered testing offers numerous advantages, such as increased test coverage, improved testing efficiency, cost reduction, early defect detection, and higher quality test cases. These benefits stem from AI’s ability to learn and adapt over time, leading to more effective and efficient testing processes.

  4. Challenges of AI in Testing: Despite its advantages, the integration of AI into software testing comes with challenges, including the potential for bias in AI models, the need for substantial data for training, high initial costs, privacy concerns, and maintenance requirements. Additionally, AI cannot fully replace the intuition and experience of human testers.

Reflections and Application to Testing Context:

The insights from the article highlight the transformative potential of AI in software testing, offering ways to automate repetitive tasks, enhance test coverage, and improve the accuracy of test results. In my testing context, AI could be particularly useful in automating the generation of test cases from requirements documents, optimizing test execution order based on risk assessment, and self-healing tests to adapt to UI changes. These applications could significantly reduce manual effort and increase the efficiency of the testing process.

However, the successful integration of AI into our testing strategy would require careful planning, including selecting the right tools, preparing quality data for training AI models, and ensuring the privacy and security of test data. Additionally, we must be mindful of the limitations of AI and the importance of complementing AI-powered testing with human expertise to achieve the best outcomes.

Challenges and Opportunities:

The main challenge in adopting AI for software testing lies in the initial setup and training phase, which demands a significant investment in time and resources. However, the long-term benefits, such as reduced manual effort and improved test accuracy, present a compelling case for its adoption. Another challenge is ensuring the AI models are free from bias and accurately reflect the testing requirements.

The opportunities for using AI in software testing are vast, from automating tedious tasks to providing insights that can guide the testing strategy. As AI technology continues to evolve, its capabilities in software testing will likely expand, offering even more tools and methodologies to improve the quality and efficiency of software testing.

In conclusion, the integration of AI into software testing holds great promise for enhancing testing processes and outcomes. By carefully navigating the challenges and leveraging the strengths of AI, we can unlock new possibilities for making software testing more efficient, effective, and adaptive to changing requirements.



The article I will be referring to is: AI in Quality Assurance: From Manual to Autonomous Testing

The article revolves around manual testing in quality assurance and begins by mentioning some useful AI-only features:

  • NLP
  • The ability to learn and improve
  • The ability to capture and analyze visual data

It argues that QA is one of the least automated forms of testing, and makes the point that the aforementioned features will resolve this, offering greater testing speeds, reducing costs, increasing test coverage, and optimizing processes (especially useful when performing large regression testing).

The article continues by arguing that LLMs, with their ability to understand human language, can aid with test authoring, enabling QA teams to compose comprehensive test cases. AI is said to be able to understand business requirements written in human language and, with sufficient knowledge of the existing system, output comprehensive test cases with great coverage.

AI can also aid with eliminating human errors, commonly present in manual testing, reducing risk and ensuring consistency across the testing process. AI systems are not susceptible to fatigue, distractions or cognitive bias. At the same time, these systems can aid with streamlining processes in testing and accurate documentation.

The article also engages in an interesting conversation regarding the inefficiency of current automated testing techniques (which have been designed by humans) and argues that these systems, being deterministic, are also susceptible to human error, while AI involves systems that can learn and evolve over time (as a great example, consider a test case where you need to generate random data in .csv format).

Overall the article reviews all known AI features and studies them in the context of testing, bringing up some interesting key takeaways in the process.

I like their take, since it breaks the boundaries of how I currently view AI (an assistant) and presents artificial intelligence in a more active role. Although I believe all of the mentioned applications will require a lot of human intervention before they can actually produce meaningful results, I think that it is worth exploring for my team to see how we can leverage some of the offered advantages the article suggests.