We continue our series exploring the role of AI in software development. To briefly recap, in Part 1 and Part 2, we examined the technical benefits and limitations of AI adoption in detail.
In this part, we turn our attention to the social dimension. Drawing on research, media coverage, and expert commentary, we explore how tensions around AI are deepening divisions within the tech community – and how these shifts may impact development practices, the role of developers, and the industry’s broader future.
Introduction
As we’ve seen earlier, while optimism about AI may appear widespread in public discussions, the reality is far from unanimous.
At first glance, such a contrast in opinion might seem like a natural consequence of the lack of common standards for a rapidly evolving technology and may resemble the kind of engineering debates that often accompany emerging trends.
But this kind of framing risks underestimating the problem. A closer examination of discussions on forums, media coverage, and a range of industry reports has led us to reflect on a deeper, underlying tension – one that has been gradually building over the past few years and may evolve into a fault line within the global developer community, especially in the West.
According to a considerable number of respondents and experts, conflicts related to AI in software development extend beyond technical issues such as security, code quality, or integration. They touch on the very core of what it means to be a software engineer, raising questions about whether key roles and skills will retain their long-term value. Some developers openly express concerns about the future of their respective sectors and strongly criticize their companies’ approaches to implementing AI in day-to-day workflows – something felt particularly acutely in the gaming industry, for example.
So far, the impact of this emerging divide between proponents and skeptics of AI is most visible in leading tech and adjacent companies. It’s important to note, however, that these organizations play a central role in setting global standards for how technology is developed and managed. Moreover, their global influence and vested interests in AI increasingly turn even internal disagreements into matters of public concern – involving ethical, legal, academic, and socio-economic dimensions.
That’s why, in this article, we’ve chosen to examine not only the technical aspects of AI-driven development. We believe that as AI and related technologies reshape society, the industry will need to account for the possibility of a broader public response to its internal contradictions. That response, even if indirect, could have real implications for how AI-related processes are structured. And companies will have to factor that into their long-term strategies.
Social friction around AI adoption and developers’ uncertain future
Although most survey respondents don’t believe that AI will replace skilled developers in the near future, a noticeable undercurrent of distrust toward the technology persists within the programming community.
First, this stems from technical challenges discussed earlier – the widespread use of AI in software development still doesn’t feel fully optimized or secure, and even its benefits are often discussed with caveats.
Second, some industry reports and media coverage still fail to give developers full confidence that companies won’t eventually begin to downsize their engineering teams as AI capabilities grow. This concern is especially palpable in the U.S. tech sector, where IT unemployment has already risen in recent years – from 2.9% to 4.4% between 2022 and 2024, according to The New York Times.
Third, even more than the hypothetical threat to jobs, what troubles developers right now is how forcefully some companies are pushing them to adopt AI and how quickly it’s reshaping their day-to-day work.
Thus, some developers at Amazon, where in-house AI solutions are being rolled out at an accelerated pace following recent breakthroughs, have told reporters that their work now resembles something closer to warehouse jobs at the company. In their view, they’re now expected to work “faster and harder,” while software development itself is becoming “more routine and less thoughtful.”
Amazon leadership, on the other hand, touts the rollout as an unequivocal success, claiming that AI will relieve employees of tedious tasks and free up time for more meaningful work. They also stress the need to move quickly before competitors do. Similar statements have been made by Shopify leadership as well.
Commenting on this divide in a New York Times article, Harvard labor economist Dr. Lawrence Katz confirms: “There is a sense that the employer can pile on more stuff” without adding any new resources.
Negative Reddit comments examined during our sentiment analysis confirmed dissatisfaction among a significant share of developers (up to 30% of the 2,000 posts reviewed) with how their companies are approaching AI adoption or AI in general. While many acknowledge the benefits of AI, they don’t see them as guaranteed. Some view its broader rollout as hype, driven more by leadership’s desire for fast results or personal gain than by careful planning.
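For readers curious how such a tally can be produced, here is a minimal sketch of a first-pass sentiment count over exported posts, using NLTK’s off-the-shelf VADER analyzer. The file name, input format, and negativity threshold are illustrative assumptions, not the exact methodology behind the figures above.

```python
# Rough first-pass sentiment tally over exported Reddit posts
# using NLTK's VADER analyzer. File name, input format, and the
# -0.05 cutoff are illustrative assumptions.
import json

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Assumed input: one JSON object per line with a "body" text field.
with open("posts.jsonl", encoding="utf-8") as f:
    posts = [json.loads(line)["body"] for line in f]

# VADER's compound score ranges from -1 (negative) to +1 (positive);
# -0.05 is a commonly used cutoff for "negative".
negative = [p for p in posts if sia.polarity_scores(p)["compound"] < -0.05]

print(f"{len(negative)} of {len(posts)} posts "
      f"({100 * len(negative) / len(posts):.0f}%) read as negative")
```

A lexicon-based pass like this is only a rough filter, of course; posts it flags still need manual review before being counted as genuine dissatisfaction with a company’s AI rollout.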
In particular, one post featured a developer complaining that AI was being pushed “on everyone” at their company. Another commenter suggested that managers were either chasing ROI over real outcomes or using the hype to attract mutual funds, concluding: “CEOs are compensated in stock and are rewarded for raising the stock price and not much else.”
Nevertheless, not everyone agreed with this assessment of executive motives, which sparked a debate. The post and the comment were later deleted.
Still, the “stock-market theory” came up more than once in that thread. Some commenters drew parallels to the dot-com boom of the late ‘90s, which ended in a market crash that wiped out many tech companies and severely shook investor confidence across the U.S. tech sector. Others echoed the sentiment, citing earlier waves of tech hype such as the big data trend from a decade ago.
Some users shared what they saw as outright absurd examples – like a manager refusing to hire support for a five-person team with only two engineers, citing AI’s capabilities as the reason. “Looking to leave as soon as possible,” the user concluded.
The comment sparked a lively response; most users supported the author, while some harshly criticized and mocked venture capitalists. In other threads, the backlash turned toward marketers. One user recalled a case where a guest speaker at a corporate event had an AI talking point inserted into his speech at the very last minute, much to his displeasure.
In a separate discussion, another user, scceberscoo, admitted that he didn’t find AI nearly as interesting as others do, mainly because he believed it was harming software development. He also expressed confusion about colleagues who spent their weekends vibe coding with AI tools. Meanwhile, his company has launched “AI Solution Weeks,” during which every engineer is expected to deliver at least one AI-powered fix for a specific business problem. Despite being, in his words, “the best” at non-AI-related tasks on his team, he worries that the initiative could derail his career.
Many commenters backed the author, but some said they didn’t understand developers who, in their view, were becoming “dinosaurs” by rejecting new tools that could reduce routine work and help them see the bigger picture. This triggered a mixed reaction as well: some users pointed out that speeding up coding with Copilot was one thing, but trying to push AI into everything at once was another – and that was “exhausting.”
User Jeremyckahn expanded on this idea, saying that he had “serious reservations” about how the industry “is navigating the AI hype,” but he didn’t see a future where technology could move forward without it.
Like several other commenters, he compared the current moment to the ’90s but noted that resisting AI now was akin to resisting the rise of the internet back then: “It’s happening, whether it should or shouldn’t.”
For his part, user Thefolsom sees AI as “a tool that has its use,” something that, like Stack Overflow or Google, can’t be ignored if one wants to stay competitive. In his view, part of being a good developer is knowing “how to find the good answers and filter out the slop.”
An analysis of GitLab’s survey of over 5,000 developers and engineering leaders found that, regardless of personal opinions, rejecting AI adoption was becoming less realistic for most companies. At the same time, developers and middle managers were significantly more likely than senior executives to say their organizations were unprepared for such changes – around 25% of AI skeptics in these groups held this view, compared to just 15% of executives overall. A survey by Writer paints an even bleaker picture: roughly 75% of executives in tech and other knowledge-based industries acknowledged that AI implementation had sparked serious internal conflicts within their organizations, including strategic misalignment, power struggles, and even deliberate sabotage.
Alexey Shinkarev, Engineering Manager at Bamboo Agile
“Talk of ‘replacing developers’ or using AI to design complex architecture – I’d place all of that squarely in the realm of hype. The same goes for blindly delegating tasks that require understanding multiple overlapping contexts, including business aspects and both explicit and implicit constraints.”
“Generational divide” and the erosion of engineering culture
Recent studies, “Generative AI in the Software Engineering Domain” and “How Coding Experience Shapes Developer Perceptions of AI Tools,” point to yet another unsettling conclusion: artificial intelligence is gradually creating a deep generational divide in how programming is perceived. According to the data, both junior and senior developers use AI at roughly the same rate; however, their understanding of its role in software development differs significantly.
For junior developers, AI often acts as a “teacher,” a source of learning and support during the early stages of their careers. This can accelerate their growth, but at the same time, it may hinder the development of essential foundational skills. Aware of the risks of overreliance, some young professionals stress that they try to follow the principle of “just for speed, not for thinking” and make an effort to maintain their professional identity in the eyes of senior colleagues. As a result, this sometimes leads to the opposite extreme: a conscious rejection of flexibility and openness to AI-driven opportunities.
In turn, senior developers, according to surveys, tend to view AI simply as a tool or a “junior colleague.” Some reject its role entirely, often out of apprehension that it could diminish their influence. In some cases, this leads to efforts to control how junior team members use AI, including outright bans, as well as to a stronger emphasis on their own irreplaceability or growing skepticism toward the technology, which they may see as a trendy but short-lived phenomenon.
Some senior engineers also view AI as a threat to “authentic” engineering craft and creativity, the kind rooted in logic and deep product understanding.
As a result, this dynamic is giving rise to a growing pattern of mutual misunderstanding. Junior developers may perceive their more experienced colleagues as overly conservative or “stuck in the past,” while senior engineers might view the younger generation as “reckless” or “spoiled” by AI. Consequently, the continuity of skill transfer and engineering culture may be at risk.
Tension is further amplified by mixed corporate signals – enthusiastic support for AI hackathons and automation initiatives in some organizations, on the one hand, and opaque security policies or sudden restrictions on AI tools without explanation, on the other.
The growing trend of vibe coding, while praised by some for democratizing access to development, also risks deepening the divide within the professional developer community.
Between innovation, law, and ethics
Copyright and licensing problems
According to Stack Overflow, 65% of the 33,700 participating developers expressed concern about data attribution when using AI tools, making it the second most pressing ethical issue among respondents. For context, misinformation ranked at the top.
The controversy stems from the fact that many large AI models were trained on open-source code and other publicly available content, often without the explicit consent of the original authors. As a result, the legal status and ownership of AI-generated code remain unclear. An analysis by Black Duck (Synopsys), published in the 2025 Open Source Security and Risk Analysis Report, found that approximately 97% of audited commercial codebases contained open-source components, and 54% of those had licensing conflicts.
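For teams without a commercial scanner, even a crude inventory of declared licenses can surface obvious conflicts early. Below is a minimal sketch over a Python environment, using only the standard library; the flagged set is a hypothetical policy choice, and declared metadata misses the copied-snippet cases that audits like Black Duck’s are designed to catch.

```python
# Minimal first-pass inventory of declared licenses for installed
# Python packages. FLAGGED is a hypothetical policy placeholder;
# real audits also match copied code, not just declared metadata.
from collections import Counter
from importlib.metadata import distributions

FLAGGED = {"GPL-3.0", "AGPL-3.0"}  # hypothetical per-project policy

licenses = Counter()
for dist in distributions():
    name = dist.metadata.get("Name", "unknown")
    declared = (dist.metadata.get("License") or "UNKNOWN").strip() or "UNKNOWN"
    licenses[declared] += 1
    if declared in FLAGGED:
        print(f"review: {name} ({declared})")

# Summary of declared licenses across the environment.
for declared, count in licenses.most_common():
    print(f"{count:4d}  {declared}")
```

A pass like this only reveals what dependencies claim about themselves; whether those claims create a licensing conflict with AI-generated or AI-suggested code is exactly the unresolved question at stake.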
A notable case that underscores this trend is a class-action lawsuit filed in November 2022 by American programmer and attorney Matthew Butterick against GitHub, Microsoft, and OpenAI. The complaint alleges that GitHub Copilot violates user licenses and copyrights by drawing on codebases and other content from public repositories without proper attribution or permission. The case is currently under review by the U.S. District Court for the Northern District of California and remains open.
Responsibility for errors and security
In addition to intellectual property issues, a separate challenge lies in determining who bears ultimate responsibility for potential errors related to AI. As noted in industry analyses, the widespread adoption of Agile development has blurred traditional lines of accountability. On the one hand, technical leaders, such as CTOs and senior engineers, are responsible for ensuring that AI integrations align with development standards and quality expectations. On the other hand, business leaders often bear the ultimate financial and strategic consequences, particularly in the event of failures or other adverse outcomes.
At the same time, technical teams, executives, and customers may assess the impact of AI on software quality and reliability in quite different ways.
Freedom of research and corporate monopolies
Following the dismissal of Timnit Gebru from Google in December 2020 and the subsequent widespread protest by scientists in the U.S. technology and academic communities, an active discussion emerged in the media about the influence of major tech corporations on the development of artificial intelligence.
Two problems can be identified here. The first involves economic and competitive issues, while the second relates to academic freedom and ethical considerations.
The first problem centers on the fact that corporations such as Google, Microsoft, and Amazon possess significant resources, enabling them to lead in AI development and set industry trends. Smaller companies, lacking such capabilities, are often dependent on the tools and computational power of these giants, limiting their competitiveness and exacerbating inequality in both innovation and technological access.
On the other hand, there are growing apprehensions that corporations may exert pressure on independent scientific research, including studies conducted within their organizations. This could lead to situations where developers and researchers working with AI cannot speak openly about problems if doing so conflicts with the commercial or ideological interests of corporations, even when transparency is critical.
Examples of such topics include research into the mental and professional challenges developers face when working with AI, as well as new environmental, social, and security risks linked to its broader impact.
Conclusion
The growing divide around AI in development is now widely seen as more than just a technical disagreement. Concerns about the future of engineering, generational conflict, and reports of corporate pressure are already affecting entire tech sectors – and could gradually reshape the industry.
Public sentiment is likely to play an increasingly important role in how AI evolves – and, indirectly, in AI-assisted software engineering.
Against this backdrop, companies should look beyond AI automation gains and consider the broader context of AI integration – from technical risks like security and reliability to unspoken resistance within teams and potential tensions between junior and senior developers. It’s also important to keep in mind the ethical and legal aspects of using AI in development.
All these issues should be considered not only when rolling out AI tools across developer teams, but also when building long-term, AI-focused strategies at the organizational level.
In the final Part 4 of the article, we’ll take a practical look at how to approach AI adoption in ways that help minimize technical, organizational, and social risks.
12. “Generative AI in the Software Engineering Domain: Tensions of Occupational Identity and Patterns of Identity Protection”, Anuschka Schmitt et al., 2024, https://arxiv.org/pdf/2410.03571
13. “From Teacher to Colleague: How Coding Experience Shapes Developer Perceptions of AI Tools”, Ilia Zakharov et al., JetBrains, 2025, https://arxiv.org/pdf/2504.13903