You need to stop doing this on your AI projects


It’s easy to get excited about AI projects, especially when you hear about all the amazing things people are doing with AI, from conversational and natural language processing (NLP) systems to image recognition, autonomous systems, predictive analytics, and pattern and anomaly detection. However, when people get excited about AI projects, they tend to ignore some important red flags. It is these red flags that cause over 80% of AI projects to fail.

One of the biggest reasons AI projects fail is that companies do not justify the use of AI from a return on investment (ROI) perspective. In short, given the cost, complexity, and difficulty of implementing AI systems, many of these projects are simply not worth the time and expense.

Organizations rush through the exploratory phase of AI adoption, jumping from a simple proof-of-concept “demonstration” straight to production, without first evaluating whether the solution will provide any positive returns. A big reason for this is that measuring the ROI of AI projects can be more difficult than initially anticipated. Teams are often under pressure from top management, colleagues, or external teams to just start their AI work, and the project moves forward without a clear answer to the problem they are actually trying to solve or the ROI they will see in advance. When companies fail to clearly understand the ROI of AI up front, deviations from expectations are the inevitable result.

Missing and misplaced ROI expectations

So what happens when the ROI of an AI project doesn’t align with management’s expectations? One of the most common reasons why AI projects fail is that the ROI does not match the investment of money, resources, and time. If you’re going to spend time, effort, human resources, and money implementing an AI system, you want a clear positive return.

Worse than misplaced ROI, many organizations do not even measure or quantify the ROI in the first place. ROI can be measured in many ways: as financial return, such as generating revenue or reducing expenses, but also as time saved, critical resources shifted or reallocated, improved reliability and safety, reduced errors, better quality control, or strengthened security and compliance. It’s easy to see how an AI project can provide a positive ROI: if you spend $100,000 on an AI project that eliminates $2 million in potential cost or liability, then every dollar spent on reducing that liability is clearly worth it. But you can only see that ROI if you actually plan for it ahead of time and manage against it.
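To make the arithmetic concrete, here is a minimal sketch in Python of that calculation. The function name and structure are illustrative only, using the figures from the example above, not part of any formal ROI framework:

def simple_roi(total_cost: float, total_benefit: float) -> float:
    """Return ROI as a fraction: (benefit - cost) / cost."""
    return (total_benefit - total_cost) / total_cost

# Illustrative figures from the example above: a $100,000 AI project
# that eliminates $2,000,000 in potential cost or liability.
cost = 100_000
benefit = 2_000_000
print(f"ROI: {simple_roi(cost, benefit):.0%}")  # prints "ROI: 1900%"

The same structure works for non-financial returns, such as hours saved or errors avoided, as long as you can express the benefit in a measurable unit before the project starts.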

Management guru Peter Drucker once famously said, “You can’t manage what you don’t measure.” The act of measuring and managing AI ROI is what separates those who see positive value from AI from those who end up canceling projects after years of effort and millions of dollars invested.

Boiling the ocean and biting off more than you can chew

Another big reason companies don’t see the ROI they expect is that projects try to bite off too much at once. Iterative, agile best practices, especially best-practice AI methods such as CPMAI, explicitly advise project owners to “think big, start small, and iterate often.” Unfortunately, many unsuccessful AI implementations take the opposite approach: thinking big, starting big, and iterating infrequently. A prime example is Walmart’s investment in AI robots for inventory management. In 2017, Walmart invested in robots that scan store shelves, and by 2022 it was pulling them out of stores.

Clearly, Walmart has ample resources and smart people, so you can’t blame bad people or bad technology for the failure. Instead, the main problem was a poor fit between the solution and the problem. Walmart realized it would be cheaper and easier to use the human employees it already had in its stores to do the same tasks the robots were supposed to do. Another example of a project not returning the expected results is the Pepper robot’s various deployments in supermarkets, museums, and tourist areas. Better people or better technology will not solve this problem; only a better way to manage and evaluate AI projects will. Methodology, folks.

Take a step-by-step approach to running AI and machine learning projects

Are these companies caught up in the tech hype? Are they just letting robots roam the aisles for the “cool” factor? Cool doesn’t solve any real business problem, nor does it address real pain points. Don’t do AI for AI’s sake. If you do AI just for AI’s sake, don’t be surprised when you don’t get a positive ROI.

So, what can companies do to ensure a positive ROI for their projects? First, stop implementing AI projects for AI’s sake. Successful companies take a step-by-step approach to running AI and machine learning projects. As mentioned earlier, methodology is often the missing ingredient in a successful AI project. Organizations are now seeing the benefits of adopting the Cognitive Project Management for Artificial Intelligence (CPMAI) methodology, which builds on decades-old data-centric project methodologies such as CRISP-DM and incorporates established agile best practices to deliver projects in short, iterative sprints.

These approaches all start with business users and needs. The first step in CRISP-DM, CPMAI, or even agile is figuring out whether you should move forward with an AI project at all. These methods acknowledge that alternatives, such as straightforward automation, direct programming, or even simply more humans, may be better suited to the problem at hand.

“AI Go No Go” Analysis

If AI is the right solution, then you need to make sure you can answer “yes” to a variety of questions that assess whether you are ready to start your AI project. The set of questions used to determine whether to move forward with an AI project is called an “AI Go No Go” analysis, which is part of the first phase of the CPMAI approach. The “AI Go No Go” analysis asks users to answer a series of nine questions in three general categories. For an AI project to really move forward, you need alignment on three things: business feasibility, data feasibility, and technical/execution feasibility. The first of the three categories addresses business feasibility, asking whether you have a clear problem definition, whether the organization is truly willing to act on the change the project will create, and whether there is sufficient ROI or impact.

These seem like very basic questions, but they are often skipped. The second set of questions concerns data, including data quality, data quantity, and data access considerations. The third set of questions relates to implementation, including whether you have the right team and required skill set, whether you can execute the model as needed, and whether you can use the model where it is planned to be deployed.
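As a sketch only, here is one way the nine questions could be encoded as a simple checklist in Python. The exact question wording and the data structure below are a paraphrase of the three categories above, not an official CPMAI artifact:

# Illustrative "AI Go No Go" checklist. The question wording and this
# structure are assumptions for demonstration, not an official CPMAI artifact.
GO_NO_GO_QUESTIONS = {
    "business": [
        "Do we have a clear problem definition?",
        "Is the organization willing to act on the change this project creates?",
        "Is there sufficient ROI or impact to justify the effort?",
    ],
    "data": [
        "Is our data of sufficient quality?",
        "Do we have enough data?",
        "Can we actually access the data we need?",
    ],
    "implementation": [
        "Do we have the right team and required skill set?",
        "Can we execute (train and run) the model as needed?",
        "Can we use the model where it is planned to be deployed?",
    ],
}

def go_no_go(answers: dict[str, list[bool]]) -> bool:
    """Return True (go) only if every question in every category is a yes."""
    return all(all(category) for category in answers.values())

# Example: honest answers reveal a data-access gap, so the result is "no go".
answers = {
    "business": [True, True, True],
    "data": [True, True, False],   # data access is not yet secured
    "implementation": [True, True, True],
}
print("GO" if go_no_go(answers) else "NO GO")  # prints "NO GO"

The strict all-or-nothing rule in the sketch mirrors the point below: a single honest “no” means you are not ready to proceed.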

The hardest part of asking these questions is answering them honestly. Honesty is important when deciding whether to move forward with a project: if you answered “no” to one or more of these questions, you’re not ready to move forward, or you shouldn’t move forward at all. Don’t just push ahead anyway, because if you do, don’t be surprised when you waste a lot of time, energy, and resources and don’t get the ROI you hoped for.


