Top 3 Barriers Companies Face When Adopting Artificial Intelligence

Artificial intelligence is currently the most powerful digital transformation technology for improving business efficiency. Yet when implementing AI tools, companies run into a number of barriers that not every manager can overcome, whether because of fears, stereotypes, or the risks involved.

Nick Spirin, CEO of Neuroinfra and author of Netology’s course “Business Transformation: Implementing Artificial Intelligence,” discusses some of these barriers and, drawing on real-life examples of company transformations, offers advice on how to overcome them.

Artificial intelligence makes it possible to automate more business processes than earlier technologies could, replacing with smart algorithms even employee decisions that are hard to formalize. It helps optimize processes within the existing business model, as well as launch qualitatively new ones.

For example, MTS cut call center costs by 80% by routing incoming requests to an AI chatbot; Magnit reduced out-of-stock products in its stores by 2% (revenue growth of up to $55 million per year) thanks to neural networks; Superjob increased the number of closed vacancies by 5.4% by personalizing candidate responses in the recruiter’s personal account; and Yandex launched the Alice smart speaker and voice assistant, combining several previously unavailable AI capabilities in one product.

However, companies looking to take advantage of the benefits of artificial intelligence face a number of barriers.

Resistance from employees

While artificial intelligence can improve the efficiency and transparency of business processes and decisions, employees do not always welcome it.

Society tends to perceive AI-driven automation as a technology that makes people redundant and takes away their professions. This is not true. Contrary to the popular belief that AI will surpass humans, in my opinion, based on personal experience across more than 50 AI projects, hybrid intelligence is the most likely trajectory for AI and humanity. In this scenario, human and machine work in symbiosis, complementing each other: AI takes over routine operations, while humans focus on creative work.

For example, when selling NeuroRetouch, our AI-powered retouching product for retail and manufacturers, we constantly run into resistance driven by retouchers’ fear of losing their jobs.

Moreover, even though the solution cuts the cost of retouching by an order of magnitude and processes photos instantly and in any volume, the heads of retouching teams sometimes resist it: they are afraid of losing their social capital, and their budget for the next year, once their business process is optimized.

Another example of negative behavior is reluctance to adopt BI (Business Intelligence) tools and basic analytics, since these introduce “extra” transparency: decisions have to be made based on data rather than “lazy” intuition.


Management and the HR team must design and implement the employee motivation system so that employees’ career goals coincide with the goals of the company and of the digital transformation; otherwise, the situations described above will arise.

Employees should be able to present their successful project to a team of top managers in order to receive additional funding and develop the project within the company.

The culture of the company should be conducive to employees’ self-expression and creative thinking.

When promising external solutions are found, the digital leader’s task is to ensure that teams evaluate them objectively, through A/B testing and other methods.
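As a minimal sketch of what such an objective evaluation can look like, here is a standard two-proportion z-test on A/B conversion data, using only the Python standard library. The sample sizes and conversion counts are hypothetical, invented purely for illustration.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 150/2000 vs A's 120/2000
z, p = two_proportion_z_test(120, 2000, 150, 2000)
print(f"z={z:.2f}, p={p:.3f}")
```

If the p-value is above the chosen significance level (commonly 0.05), the observed uplift may well be noise, and the external solution should not yet be declared a winner.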

Lack of attention to cybersecurity

Even when a company’s AI processes and culture are perfectly aligned, standards are fixed, and employees are motivated, there is always the risk of force majeure. Such force majeure includes hacker attacks and data leaks.

According to the Identity Theft Resource Center, the number of breaches in 2018 reached 1,244 and affected 64.4 million credit cards. The victims include such well-known companies as Home Depot (2014), Equifax (2017), and Capital One (2019).

First of all, leaks damage a company’s reputation: according to KPMG’s analysis, 19% of Home Depot customers said after the breach that they would end their relationship with the company. Companies must also pay multi-million-dollar compensation to affected customers, payment systems, and banks.

It turns out that reputational and financial losses force companies to be overly cautious, which, in turn, slows down the transformation as a whole.

For instance, after a leak it took one of our clients in the financial sector more than six months to grant access to even minimally useful anonymized data. Another client’s security service denied access to our external team of consultants even without any leak, and we had to develop our analysis algorithms on synthetic data. And after a leak, government regulators can impose special restrictions on who may access data and at what level, which complicates the process of assembling AI teams.


Companies must continually invest in cybersecurity, conducting vulnerability analyses of both customer-facing products and enterprise systems.

Moreover, it is worth introducing standards for data access that limit it according to the principle of “the minimum authority needed to complete the task, without hindering the workflow,” and maintaining audit logs of data access in case of a leak.
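The least-privilege principle plus audit logging described above can be sketched in a few lines of Python. The roles, permission names, and data-access function below are hypothetical examples, not a real system.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data_access_audit")

# Hypothetical role-to-permission map: each role gets only the
# permissions its tasks actually require (least privilege).
PERMISSIONS = {
    "analyst":     {"read_anonymized"},
    "ml_engineer": {"read_anonymized", "read_features"},
    "dba":         {"read_anonymized", "read_features", "read_raw"},
}

def requires(permission):
    """Deny by default; log every access attempt for later audit."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, role, *args, **kwargs):
            allowed = permission in PERMISSIONS.get(role, set())
            audit_log.info("user=%s role=%s perm=%s granted=%s",
                           user, role, permission, allowed)
            if not allowed:
                raise PermissionError(f"{role} lacks {permission}")
            return func(user, role, *args, **kwargs)
        return wrapper
    return decorator

@requires("read_raw")
def read_raw_records(user, role):
    return ["record-1", "record-2"]  # stand-in for a real query
```

Every call, granted or denied, leaves an audit-log entry, so after an incident it is possible to reconstruct who touched which data.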

Companies are not ready for the peculiarities of AI projects

Compared with traditional software, testing AI is orders of magnitude harder. Beyond traditional tests such as unit tests, integration tests, and scaling and fault-tolerance tests, AI testing includes statistically assessing the quality of a machine learning model on test data, exploratory analysis of use cases to identify bias in the data sample, and interpreting the model’s decision-making logic.
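As a minimal sketch of the first two checks, here is a statistical quality assessment on held-out test data, plus a per-group slice to surface sampling bias. The labels, predictions, groups, and threshold are all invented for illustration.

```python
# Overall accuracy on a held-out test set
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical test data: labels, model predictions, and a group
# attribute (e.g. customer segment) for each example
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

overall = accuracy(y_true, y_pred)

# Slice the test set by group to expose performance bias
per_group = {
    g: accuracy([t for t, gg in zip(y_true, groups) if gg == g],
                [p for p, gg in zip(y_pred, groups) if gg == g])
    for g in set(groups)
}

# Flag the model if any subgroup lags the overall score too much
THRESHOLD = 0.15
biased = [g for g, acc in per_group.items() if overall - acc > THRESHOLD]
```

A single aggregate metric can hide a subgroup the model serves badly; slicing the evaluation by attribute is one simple way such bias is usually surfaced.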

In addition, companies need to think about protecting their AI systems from intruders, who constantly probe for blind spots in the AI’s logic and in the data used to train it.

This makes the AI model QA team one of the most important in the organization. Unfortunately, very few companies truly understand this today. Even Tesla, which has managed to assemble a first-class AI team, sees situations where AI flaws lead to negative consequences. As a result, customers grow more cautious about the company’s products, which hinders business growth and indirectly slows AI adoption, since the business collects less data and user feedback.


The company must establish clear acceptance criteria and testing standards for intelligent systems, so that it understands the current capabilities and limits of its AI and can detect defects early in product development.

Defects found during testing can be either blocking or acceptable. In the latter case, it is important to communicate these limitations to customers.
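The blocking-versus-acceptable split can be encoded as a simple release gate: blocking defects stop the release, while acceptable ones are shipped but collected into a list to communicate to customers. The defect names and structure below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    name: str
    severity: str  # "blocking" or "acceptable"

def release_decision(defects):
    """Ship only if no blocking defects; surface acceptable ones to users."""
    blocking = [d for d in defects if d.severity == "blocking"]
    acceptable = [d for d in defects if d.severity == "acceptable"]
    return {
        "ship": not blocking,
        "blocking": [d.name for d in blocking],
        "document_for_customers": [d.name for d in acceptable],
    }
```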

Non-critical defects of AI algorithms can be compensated for by special interface solutions, for example, by showing contextual hints.

To sum up: despite all the peculiarities of AI-based projects, companies should start piloting AI solutions and platforms as early as possible. The difficulties and barriers that arise along the way are typical of most transformation projects, and with a correctly formed AI implementation strategy and project portfolio, the long-term benefits of AI fully outweigh them.

We hope that the recommendations we have listed will help companies move faster and successfully overcome barriers to innovation.
