Let’s start with an uncomfortable truth: resistance to innovation is not the fault of the individual. It is a “feature” of the system.
Key Findings
- Innovation in the company changes the rules of the game; companies must act now
- Data from McKinsey and Gartner confirm: early adopters grow 2-3x faster
- The key is to start with a pilot, not a big transformation
- Slovak companies lag behind by 2-3 years — the window of opportunity is closing
- Investment in AI returns within 18 months if deployed correctly
Why systems reject change even when the numbers scream “yes”
One of the best-documented barriers is status quo bias—the tendency to stick with the status quo even when the alternative seems more rational. In a classic paper, William Samuelson and Richard Zeckhauser showed that people disproportionately prefer “no change” in real-world decisions (health plans, elections, and other scenarios), and thus the system naturally resists change—even when change makes sense.
The second barrier is psychological safety: if people in the team do not feel that they can take risks (ask questions, admit a mistake, suggest “stupidity”), innovations will be stifled even before they arise. Amy Edmondson has empirically linked psychological safety to team learning and learning to performance improvement – without it, an organization learns slowly or not at all.
The third barrier is, paradoxically, “expertise”. Experienced experts may block innovation not because it is bad, but because it would require them to rewrite their own mental models and life’s work. This mechanism is called the “immunity to change” approach: beneath the surface of the declared goal (“we want to innovate”) there are often competing obligations (“I must not lose control, status, sense of competence”).
The fourth barrier is organizational: companies naturally rush into “exploitation” (using what works) and neglect “exploration” (searching for the new). James March has described this trade-off as the core of organizational learning – and the problem is that the short-term rewards of efficiency often outweigh the long-term rewards of experimentation.
And then there is the phenomenon you named very accurately in the assignment: the culture of “not standing out”. Social psychology calls this tall poppy syndrome: the tendency to cut down those who stand out, and to take quiet satisfaction when they fall. It’s not “just a feeling”: there is a line of research mapping people’s reactions to high performers and to their failures.
Here comes the first great management paradox (and the first practical “aha moment”):
If you want innovation, you have to protect exceptions. Innovations arise at the edge of the average: in above-average talent, extreme curiosity and rapid iteration. The average is stable. Exceptionality is fragile. And the system has a natural tendency to “cut down” that fragility.
At the strategy level, this is complemented by a second big thesis: large firms can stagnate when they focus too much on existing customers and existing metrics, thereby overlooking disruptive innovation. Clayton Christensen elaborates on this in The Innovator’s Dilemma and related texts on disruptive innovation.
And finally: competition. When a company does not feel the pressure of the opponent, its “innovation discipline” often decreases. Economic research shows that the relationship between competition and innovation is inverted U-shaped: both too little competition and too much competition can inhibit innovation, while “healthy pressure” promotes innovation.
Summary of the facts vs. interpretations (to remain critical):
Proven facts: status quo bias exists; psychological safety is related to team learning; organizations solve the tension exploration vs. exploitation; competition has a demonstrable relationship to innovation.
My synthesis (interpretation): “the system kills innovation by cutting off extremes” is a practical description of how these scientific phenomena manifest in management, not a universal law. (All the more reason to measure reality in every company.)
Fear of AI: between real risks and media noise
AI today inspires a strange mixture of fascination and panic. And that panic often spreads faster than competence.
It is a fact that in 2023 there were public calls to pause the training of the most powerful models – for example, the Future of Life Institute’s open letter “Pause Giant AI Experiments”. At the same time, part of the criticism argues that the dramatic “end of the world” framing can drown out the immediate, solvable risks (disinformation, bias, abuse, non-transparency), and that the motivations of the signatories were diverse.
Added to this were significant warnings from some authorities. Geoffrey Hinton, for example, publicly stated that he left Google to speak openly about the risks of AI. This is important to take seriously – but it’s equally important not to be paralyzed.
It is most beneficial for a manager to divide the “fear of AI” into two layers:
Layer A – real, manageable risks (processes win here).
These include in particular:
- unreliability of outputs (so-called hallucinations), i.e. convincing-sounding, but false answers;
- security and sensitive data;
- regulatory requirements and auditability;
- reputational risk (AI makes a mistake – and the company looks sloppy).
There are very specific frameworks here. NIST has released the AI Risk Management Framework (AI RMF 1.0), which gives organizations practical language and practices for identifying, measuring and managing AI risks. The ISO/IEC 42001:2023 standard (AI management system) treats AI governance as a management system, i.e. a process approach rather than just technical tinkering.
And in Europe, legislation is also a reality: the AI Act (EU regulation) provides a risk-oriented framework and its application is introduced gradually (phasing of obligations). (Note: this is not legal advice; specific cases require a lawyer and compliance.)
Layer B – speculative, long-term risks (scenarios win here).
These include issues of superintelligence and “existential” risks, which are the subject of lively professional debate. There is no consensus on the exact probability or timing – there are significant differences of opinion among experts and even among research groups.
Critical thinking in practice means: neither deny nor exaggerate. In the management world, what most often happens is not that AI “rules the world tomorrow”, but that a company wastes 12-18 months in panic or passivity, while a competitor takes the market with “quiet automation”.
Manager’s playbook: automations that really free up time
Here’s the gist of the article: winning in the AI era doesn’t mean “buying a tool”. It means building a trajectory: a series of small steps that reduce risk and increase return.
Why trajectory? Because big goals are demotivating if they are not broken down into manageable steps. Research on goals and motivation shows that goals work best when they are specific, measurable, and when there is a feedback mechanism; “implementation intentions” like “if X happens, I will do Y” also help.
AI Deployment Trajectory in Four Steps
Step 1: Choose a process, not a “department”.
Look for processes with three signs: high volume, repeatability, text/data. (Typically: reports, offers, meeting minutes, request processing, internal directives, FAQs, recruitment screenings, feedback analysis.) This selection is consistent with where research is already measuring AI productivity in real-world work conditions – especially text and knowledge tasks.
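The selection logic above can be sketched as a tiny scoring exercise. A minimal sketch, assuming made-up thresholds and process names (the weights, the 500/month saturation point and the candidates below are illustrative, not data from the article):

```python
# Score candidate processes on the three signs: volume, repeatability, text/data share.
def ai_suitability(volume_per_month, repeatability, text_data_share):
    """Each input is normalized to 0..1; returns a simple unweighted average."""
    volume_score = min(volume_per_month / 500, 1.0)  # assumption: 500+/month saturates
    return round((volume_score + repeatability + text_data_share) / 3, 2)

candidates = {
    "meeting minutes":    ai_suitability(80,  0.9, 1.0),
    "request processing": ai_suitability(600, 0.8, 0.7),
    "strategy workshops": ai_suitability(4,   0.2, 0.3),
}
best = max(candidates, key=candidates.get)  # the pilot candidate to start with
```

The exact weights matter less than the discipline of comparing processes on the same three axes before picking a pilot.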
Step 2: Make a “dummy” within 10 days – not to make it perfect, but to make it measurable.
This is exactly the logic of the “corporate innovator from below” from your assignment. A demo overcomes skepticism better than a presentation. However, it is important to build on transparency from the start: what is the input, what is the output, and how we can verify the result. This is the foundation of trust that NIST also recommends in its AI risk management approach.
Step 3: Measure three metrics – time, quality, risk.
Practical minimum:
- time per unit (e.g. report / ticket / offer),
- quality (defect rate, complaints, internal customer satisfaction),
- risk (data leaks, unverified claims, legal implications).
This isn’t “red tape” – it’s a safeguard against an AI project ending up as hype without value (and becoming a reputational issue).
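The three-metric minimum can be sketched as a baseline-vs-pilot comparison. All numbers and the `scorecard` helper below are hypothetical, purely to show the decision logic:

```python
# Compare a baseline period against a pilot period on time, quality and risk.
def scorecard(minutes_per_unit, defect_rate, risk_incidents):
    return {"time": minutes_per_unit, "quality": defect_rate, "risk": risk_incidents}

baseline = scorecard(minutes_per_unit=45, defect_rate=0.08, risk_incidents=0)
pilot    = scorecard(minutes_per_unit=20, defect_rate=0.06, risk_incidents=1)

time_saved_pct = round(100 * (1 - pilot["time"] / baseline["time"]))

# The pilot only "wins" if it saves time without worsening quality,
# and every risk incident blocks scaling until it is reviewed.
pilot_wins = (time_saved_pct > 0
              and pilot["quality"] <= baseline["quality"]
              and pilot["risk"] == 0)
```

Note the asymmetry: time savings alone never justify scaling if the risk column is non-zero.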
Step 4: Only then scale – and scale through governance.
If you have the ambition to make AI a part of everyday processes, you will need rules: who can use what, what data is allowed, what outputs must have citations/sources, who is responsible. This is exactly where the AI Act (high-risk cases, transparency, documentation) and ISO/IEC 42001 (management system) are headed.
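Such rules can live as data with a default-deny check. A minimal sketch; the roles, data classes and outcomes below are invented examples, not terms from the AI Act or ISO/IEC 42001 themselves:

```python
# Usage policy as data: (role, data class) -> outcome. Anything not listed is forbidden.
POLICY = {
    ("support_agent", "public"):   "allowed",
    ("support_agent", "personal"): "allowed_with_citation",
    ("intern", "public"):          "allowed",
    ("intern", "personal"):        "forbidden",
}

def check_usage(role, data_class):
    # Default-deny: unknown combinations are never silently allowed.
    return POLICY.get((role, data_class), "forbidden")
```

Keeping the policy as data (rather than scattered if-statements) is what makes it auditable, which is the whole point of governance.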
Six “super tools” as management solutions (not as apps)
Instead of a list of app names (they change every three months), I list functions that can be built with various tools – internally or with a partner.
AI as a first draft for texts that cost you hours.
Experiments show that generative AI can increase writing speed and improve quality on mid-level tasks – especially for people without years of writing practice.
AI assistant in customer support – empowers newcomers, standardizes answers.
In a real call center, researchers measured a productivity increase after introducing a generative assistant: about 14-15% on average, with significantly larger gains for newcomers. This is exactly the type of impact a manager can turn into a scalable process (faster training, lower turnover, more stable quality).
“Mathematics with words”: semantic maps (embeddings) for insight from data.
What you call semantic maps in the assignment is based on vector representations of words and sentences. Modern approaches to searching for similarities in texts, clustering customer requests or detecting topics in feedback grew out of this.
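A toy illustration of the idea: real embedding models map each text to a high-dimensional vector; below the vectors are made up by hand, and only the similarity math (cosine) is real:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made 3-dimensional "embeddings" of customer requests (illustrative only).
requests = {
    "invoice is wrong": [0.90, 0.10, 0.00],
    "billing error":    [0.85, 0.15, 0.05],
    "password reset":   [0.00, 0.10, 0.90],
}
query = [0.85, 0.15, 0.05]  # "embedding" of a new billing-related ticket
closest = max(requests, key=lambda k: cosine(query, requests[k]))
```

With real embeddings (hundreds of dimensions), the same `max`-by-cosine step is what powers similarity search, clustering of requests and topic detection in feedback.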
Automated knowledge compression: 12 action steps from 400 pages of documentation.
This is one of the fastest returns in management: experts’ time is expensive and knowledge “rots” in documents. AI can be used to summarize, extract decisions, create checklists and onboarding materials – with the caveat that critical claims must be verifiable. The problem of “hallucinations” is not a reason not to do it; it’s the reason to do it with the verification process.
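To make the “compression” idea concrete, here is a deliberately naive extractive sketch: score sentences by word frequency and keep the top ones. A production pipeline would use an LLM plus a verification step; this only shows the select-and-compress shape:

```python
def summarize(text, n_sentences=2):
    """Keep the n highest-scoring sentences, preserving their original order."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = text.lower().replace(".", " ").split()
    freq = {w: words.count(w) for w in set(words)}

    def score(sentence):
        return sum(freq.get(w, 0) for w in sentence.lower().split())

    ranked = sorted(sentences, key=score, reverse=True)
    top = set(ranked[:n_sentences])
    return ". ".join(s for s in sentences if s in top) + "."
```

The serious version replaces the scoring with a model, but keeps the same contract: a long input, a short output, and a way to trace each kept claim back to the source.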
RAG (retrieval-augmented generation): a corporate chatbot that can cite internal sources.
RAG (retrieval-augmented generation) is an architecture that combines text generation with document retrieval to increase factuality and give you provenance (where the information came from). In practice, it is one of the best compromises between speed of introduction and confidence in outputs.
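A minimal RAG sketch, with naive word-overlap retrieval standing in for embeddings (the document names and contents are invented); the point it shows is that every answer carries its source:

```python
# Tiny in-memory "knowledge base" (illustrative documents).
DOCS = {
    "hr-policy.md": "vacation requests need manager approval two weeks ahead",
    "it-policy.md": "password resets are handled by the it service desk",
}

def retrieve(question, k=1):
    """Rank documents by shared words with the question (embeddings in real systems)."""
    q = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(q & set(DOCS[d].split())), reverse=True)
    return ranked[:k]

def answer_with_citation(question):
    src = retrieve(question)[0]
    # A real system would now pass DOCS[src] + question to a language model;
    # here we return the retrieved passage together with its citation.
    return f"{DOCS[src]} [source: {src}]"
```

Even in this toy form, the architecture is visible: retrieval narrows the model’s raw material to your documents, and the citation gives the reader something to verify.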
Frequently Asked Questions
What does innovation in a company mean for Slovak companies?
Innovation in the company is a key topic for Slovak companies in 2026. The article analyzes specific data, trends and recommendations based on McKinsey, BCG and Gartner research. Leaders must act now to maintain a competitive edge.
How to implement innovation in the company in practice?
Implementing innovation in a company requires a strategic approach: first an audit of the current state, then a pilot project and gradual scaling. The key is to involve the company’s management and build internal expertise.
What is the outlook for innovation in the company until 2027?
Trends show that innovation in the company will be an increasingly important topic. According to WEF and Gartner, the adoption of AI is expected to accelerate, regulations will tighten and the pressure for data-driven decision-making will increase. Companies that start acting now will get a 2-3 year head start.


