Annotated Bibliography
Barnes, Aaron J., Yuanyuan Zhang, and Ana Valenzuela. “AI and Culture: Culturally Dependent Responses to AI Systems.” Current Opinion in Psychology, vol. 58, Aug. 2024, article 101838. Elsevier, https://doi.org/10.1016/j.copsyc.2024.101838.
This article examines how cultural identity shapes people's responses to AI. It argues that differences along the individualism–collectivism dimension explain why AI is adopted and trusted at different rates across countries. In individualistic cultures such as the US, people tend to see AI as something separate from themselves that risks violating their autonomy and privacy. In collectivist cultures such as China or India, people are more willing to view AI as an extension of themselves and tend to believe it can help maintain social harmony and achieve shared goals. The article also notes that cultural differences influence decision-making, data sharing, and privacy attitudes around AI use: individualist societies tend to care more about privacy and self-control, while collectivist societies are more accepting of AI systems that emphasize consensus and social connection. The authors support these claims with cross-cultural evidence showing that cultures differ in AI trust and in willingness to disclose personal information.
Comunale, Mariarosaria, and Andrea Manera. “The Economic Impacts and the Regulation of AI: A Review of the Academic Literature and Policy Actions.” IMF Working Papers, vol. 2024, no. 065, 22 Mar. 2024, IMF eLibrary, www.elibrary.imf.org/view/journals/001/2024/065/article-A001-en.xml.
This article surveys the impact of AI on the economy in recent years and the policies various countries have introduced to regulate the technology. The authors organize the current academic literature, focusing on how AI affects productivity, the labor market, and the distribution of income. They note that although AI may improve the economy, it may also widen the gap between rich and poor. The article also compares regulatory approaches across countries. This is particularly relevant to our group's research, since we are concerned with which factors affect the development of AI. A strength of this article is its comprehensive coverage and multi-country comparison, which makes it convenient to cite in our research.
De Graaf, Ysanne, et al. “Societal Factors Influencing the Implementation of AI-Driven Technologies in (Smart) Hospitals.” PLOS ONE, vol. 20, no. 6, June 2025, p. e0325718. https://doi.org/10.1371/journal.pone.0325718.
This article discusses the factors that affect the application of AI technology in hospitals, focusing on people's trust in AI: patients worry that their personal privacy will be leaked, and there are questions about whether medical staff have received adequate training. Together these determine whether AI can truly be accepted. This aligns with our group's research topic and clarifies that AI development depends not only on whether the technology is good enough, but also on whether society accepts it and whether relevant policies support it. The article draws on many real hospital cases, which makes it credible. A limitation is that its conclusions may not transfer to AI applications in other settings, such as schools.
Demaidi, Mona Nabil. “Artificial Intelligence National Strategy in a Developing Country.” AI & Society, vol. 39, 2024, pp. 1237–1251. Springer, https://doi.org/10.1007/s00146-023-01779-x.
This article identifies education, entrepreneurship, government, research, and legal frameworks as the key components developing countries need when building AI readiness. Demaidi focuses specifically on Palestine, interviewing the private, public, government, and educational sectors; the results show low awareness of AI, suggesting that the country does not recognize AI's true potential. The study offers surveys, interviews, and focus groups, and it aligns with pillars in our dataset, helping us show that societal factors like education, awareness, global partnerships, and policy matter in a developing country's AI development. Based on the findings, Demaidi proposes a five-year plan that focuses on capacity building rather than technology itself, which could be useful in similar contexts for developing AI and human capital.
Filgueiras, Fernando. “The Politics of AI: Democracy and Authoritarianism in Developing Countries.” Journal of Information Technology & Politics, vol. 19, no. 4. Taylor & Francis, https://www.tandfonline.com/doi/full/10.1080/19331681.2021.2016543.
Filgueiras examines how authoritarian and democratic regimes affect AI development by looking at 30 countries, finding that authoritarian regimes generally produce stronger AI outcomes. The author credits this to centralized decision-making, fewer regulatory constraints, and greater control over data, in contrast to democracies' checks and balances and slower policymaking. Although authoritarian AI development is stronger, democratic countries are more ethical and better suited for AI governance. Filgueiras's work is important because it goes beyond education and economics; regime type adds depth to our project by letting us weigh which societal factors really work best, and at what cost. The article also provides a useful research group, since its 30 countries overlap with those in our database, which is important for cross-country comparisons.
Ge, Xiao, et al. “How Culture Shapes What People Want from AI.” Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 11 May 2024, ACM Digital Library, https://dl.acm.org/doi/10.1145/3613904.3642660.
Ge et al. investigate how European Americans, African Americans, and Chinese participants prefer and expect AI to work, offering a perspective on how culture shapes perceptions of AI. This is useful to our research because it shows that societal readiness for AI, acceptance and adoption, and design orientation are culturally mediated. The values and visions of a culture shape the trajectory AI takes there, which intersects with the society and technological-development factors in our database. Ge et al. found that Chinese participants prefer connection with AI rather than control over it, and prefer AI to have greater capacity to influence than other groups do. African Americans' preferences generally fell between those of European Americans and Chinese participants, showing that different groups hold different ideals for AI systems, which may help explain different rates of development.
Hassan, Masooma, et al. “Barriers to and Facilitators of Artificial Intelligence Adoption in Health Care: Scoping Review.” JMIR Human Factors, vol. 11, no. 1, Aug. 2024, p. e48633. https://doi.org/10.2196/48633.
Hassan and colleagues reviewed the factors influencing AI applications in real-world healthcare settings. Their core conclusion is that beyond algorithms, the critical factors include social and organizational trust driven by explainability, transparency, and verifiable performance; clear governance and oversight; robust data integration capabilities; workflow adaptability and usability; leadership support; and staff training. Fundamentally, whether AI transitions from pilot projects to practical implementation hinges on human and institutional factors: a lack of trust, regulation, and skills stalls adoption and hinders scalability. This conclusion resonates with findings from other healthcare and industry studies, providing valuable cross-validation. It is important to note that the study is limited to the healthcare sector and relies on thematic analysis rather than causal verification, so caution is needed when extrapolating its conclusions to other industries.
Henzler, Dennis, et al. “Healthcare Professionals’ Perspectives on Artificial Intelligence in Patient Care: A Systematic Review of Hindering and Facilitating Factors on Different Levels.” BMC Health Services Research, vol. 25, no. 1, May 2025, p. 633. BioMed Central, https://doi.org/10.1186/s12913-025-12664-2.
Henzler et al. conducted a systematic review of healthcare professionals' perceptions of AI in clinical applications. They categorized the enablers of and barriers to AI implementation into three levels: individual, organizational, and system. Their findings reveal that the core challenges lie not in the models themselves but in issues such as privacy and accountability. These social and institutional factors ultimately determine whether AI can transition from pilot projects to routine clinical practice. This offers valuable insight for our research on how social factors drive AI advancement, clearly indicating that people, processes, and systems must all be considered; otherwise, even the most advanced technology cannot be practically applied. The review focuses solely on the healthcare sector, primarily from the subjective perspective of healthcare professionals, and lacks causal testing and cross-industry comparisons, but as a reference framework for mapping social factors it remains highly practical.
Lazăr, Sorin Paul, et al. “Socioeconomic and Cultural Determinants of the Development of Artificial Intelligence.” Amfiteatru Economic, vol. 26, no. 66, Editura ASE, https://www.econstor.eu/bitstream/10419/300606/1/1894660536.pdf.
This article examines how socioeconomic factors shape national AI development, which directly relates to our research question. Lazăr et al. argue that one main cultural dimension has a statistically significant effect. While economic metrics like GDP per capita, GDP growth, research and development spending, and urbanization all had significant positive effects on AI development (measured by a quantitative variable the authors call “AI intensity”), among the cultural factors only uncertainty avoidance was significant. Uncertainty avoidance describes how comfortable a society is with ambiguity and risk. This study offers direct evidence we can use in our project and signals a starting point for which economic and cultural predictors to examine first. It also complements other literature in the field that emphasizes considering societal factors alongside technological ones in national AI development. One limitation, acknowledged by the authors, is the sample of only 60 countries, which makes it harder to find statistically meaningful relationships, especially when the differences are more nuanced.
Qian, Yuzhou, et al. “Societal Impacts of Artificial Intelligence: Ethical, Legal, and Governance Issues.” Societal Impacts, vol. 3, article 100040, ScienceDirect, https://www.sciencedirect.com/science/article/pii/S2949697724000055.
This article looks at the ethical, legal, and governance factors that shape AI development. Qian et al. discuss three studies and present their findings: the first develops a theoretical framework that aligns AI with standardized law and ethics; the second examines bias and discrimination in AI systems; the third designs a global framework for AI governance. The authors argue that interdisciplinary collaboration and public trust must be fostered through transparency and regulation in order to advance accountable AI development. This addresses our research question directly by framing societal structures as determinants of AI development. While this collection of studies serves as a solid theoretical base for our project, it lacks the empirical data that studies with hard evidence would provide.
Radu, Roxana. “Steering the Governance of Artificial Intelligence: National and International Dimensions.” Policy and Society, vol. 40, no. 2, June 2021, pp. 178–193. Oxford University Press, https://doi.org/10.1080/14494035.2021.1929728.
This article studies the different AI strategies and governance models designed and adopted by national governments. Radu examines strategies from 2016 to 2019 and concludes that hybrid AI governance models are the most popular; in other words, both public and private actors play key roles in AI policymaking. She also finds that states tend to prioritize ethics-oriented frameworks while maintaining a purposeful vagueness about responsibilities in AI governance. While this vagueness is crucial for flexibility, she finds that it also weakens accountability. The article contributes to the discourse on how to responsibly govern AI, especially given the lack of rigidity in current governance, and provides context for analyzing how different national strategies attempt to balance innovation with social responsibility. While it serves as a strong foundation and offers insight into methodologically analyzing something that might be considered subjective, its main limitation is its sample, which mainly covers developed nations.
Liu, Rong, Wenying Luo, and Ruiqian Su. “How Does Cultural Diversity Influence Corporate AI Development?” Finance Research Letters, vol. 81, July 2025, article 107506. Elsevier, https://doi.org/10.1016/j.frl.2025.107506.
This article asks whether cultural diversity can help companies push forward AI development. Using data from Chinese listed companies between 2000 and 2022, it concludes that it can: in cities with more dialect varieties, local companies usually put more effort into AI. The researchers first calculated each city's level of cultural diversity from dialect data, then extracted AI-related statements from company annual reports and patents to build an indicator of AI activity. They also controlled for variables such as year, regional GDP, investment conditions, and the penetration of communication tools and the internet to make the results more reliable. The core pattern is stable: the greater the dialect diversity, the stronger the AI development, with enhanced innovation capability as the key channel. The research also found that the effect of dialect diversity on AI is more pronounced for firms with fewer financing constraints, in more marketized regions, and among non-state-owned enterprises, indicating that these conditions help transform cultural diversity into practical AI achievements. From both practical and policy perspectives, cultural diversity serves as a catalyst for innovation: forming diverse teams and recruiting across regions may stimulate AI projects, while easier access to financing and a more flexible market can enable these achievements to truly take root.
Ojeda-Castro, Angel, et al. “Global Drivers of Artificial Intelligence Development: An Empirical Study Using the Global AI Index Across 62 Countries.” Issues in Information Systems, Jan. 2025, https://doi.org/10.48009/2_iis_110.
This article uses Global AI Index data from over 60 countries to analyze the key drivers of AI development. The research finds that investment in education, the maturity of the technology industry, government policy support, and solid digital infrastructure are the main factors affecting AI development. This speaks to our group's question of which conditions help AI move forward and which hold it back. Its greatest strength is that it is supported by real data, which makes its conclusions reliable. However, it mainly captures national-level patterns through big data, and many less visible factors that can affect the development of AI remain unmeasured.
Zamir, Samina, et al. “Examining the Role of Higher Education Learning, Research Excellence, and Innovation Capacity in Driving AI-Technological Advancements in Nordic Countries.” Humanities and Social Sciences Communications, vol. 12, no. 1325, 2025, Springer Nature, https://doi.org/10.1057/s41599-025-05665-3.
This article asks whether universities and research systems truly promote the development of artificial intelligence, or whether that is merely a widespread assumption. The researchers used data from Denmark, Finland, Iceland, Norway, and Sweden from 2009 to 2023 and focus on three aspects. The first is higher education training: how many students are enrolled, how much is spent, and whether graduates find employment smoothly. The second is research strength, judged by rankings of research output. The third is innovation capacity. The results are clear: all three aspects promote the development of AI, and good governance makes the effect even stronger, with higher education cultivation the most influential factor and innovation the next. The authors also discovered a two-way relationship: universities and research systems drive AI forward, and the development of AI in turn helps the education and research systems progress. A drawback is that using national-level data may blur the specific impacts at the local level.
Khanfar, Ahmad A., et al. “Factors Influencing the Adoption of Artificial Intelligence Systems.” Management Decision, 8 Jan. 2025, Emerald Insight, www.emerald.com/insight/content/doi/10.1108/md-05-2023-0838/full/html.
This article explores the main factors that affect how organizations adopt AI systems. Khanfar et al. group these factors into five categories: individual, social, organizational, environmental, and technological. Using a systematic review of 91 studies, they identify the most common influences and challenges across industries. The article is important because it provides a structured understanding of what drives or limits AI use in practice, showing that adoption depends as much on people and institutions as on the technology itself. For our thesis, it gives us a useful framework to connect national and organizational readiness. It highlights that successful AI development requires coordination between technological capability, leadership, and cultural support, and that awareness and perception play a major role in how AI is embraced across different contexts.
Chui, Michael, et al. “The Economic Potential of Generative AI: The Next Productivity Frontier.” McKinsey & Company, 14 June 2023, www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#/.
Chui et al. argue that generative AI has the power to transform the global economy by significantly increasing productivity and efficiency. Their report estimates that the technology could add between $2.6 trillion and $4.4 trillion in value each year, based on 63 detailed use cases across multiple sectors. They also emphasize that these gains depend on whether organizations can successfully train and reskill workers to adapt to new roles. This reading is important because it links AI directly to economic performance and shows how technological innovation can reshape labor markets. For our thesis, it connects the social and economic sides of AI development by showing that education, adaptability, and workforce readiness are essential for realizing AI's potential. It also supports the argument that societal and policy support must accompany technological investment for long-term growth.
Hjaltalin, Illugi Torfason, and Hallur Thor Sigurdarson. “The Strategic Use of AI in the Public Sector: A Public Values Analysis of National AI Strategies.” Government Information Quarterly, vol. 41, no. 1, 2024, p. 101914. Elsevier, https://doi.org/10.1016/j.giq.2024.101914.
This article examines how different governments design and justify their national AI strategies through the lens of public values. Hjaltalin and Sigurdarson analyze 28 strategy documents and find that governments tend to emphasize values such as efficiency, transparency, innovation, and fairness, but to varying degrees depending on their political systems. The authors argue that these differences show how countries balance technological progress with ethical and social responsibility. This article is important because it highlights the connection between governance, political culture, and the direction of AI development. For our thesis, it adds a crucial dimension by showing that AI policy reflects national priorities and value systems, not just economic goals. It provides us with evidence for a potential argument that how a society governs AI is a reflection of what it values most, whether that is innovation, accountability, or public trust.
Xie, Yu, and Sofia Avila. “The Social Impact of Generative LLM-Based AI.” Chinese Journal of Sociology, Feb. 2025, https://doi.org/10.1177/2057150X251315997.
Xie and Avila argue that generative large language models (LLMs) are changing social and cultural life. They show that while these systems can democratize access to knowledge and creativity, they can also widen inequality between groups and nations, depending on how they are used and governed. Their study combines sociological theory with global case examples to explore these impacts. The article is important because, beyond the technical side of AI, it focuses on how society responds to and is transformed by it. For our thesis, it helps illustrate how social factors like inequality, trust, and cultural attitudes shape both the speed and direction of AI development. It also supports the idea that AI's success depends not only on innovation but on how societies choose to integrate and adapt these technologies.
Yu, T., W. Yang, J. Xu, and Y. Pan. “Barriers to Industry Adoption of AI Video Generation Tools: A Study Based on the Perspectives of Video Production Professionals in China.” Applied Sciences, vol. 14, no. 13, 2024, p. 5770. https://doi.org/10.3390/app14135770.
Yu, Yang, Xu, and Pan’s 2024 study, published in Applied Sciences, focuses on the application of AI video generation tools in the video production industry. Through expert interviews and a questionnaire survey of Chinese video practitioners, it identifies non-technical factors, such as market demand, collaborative efficiency, AI acceptance and trust, institutional environment, corporate training, and process adjustments, that directly determine whether tools can be implemented beyond the demonstration phase. This research provides concrete answers from the entertainment and film industry to the question “Which social factors most effectively drive AI development?” Specifically, clear market and institutional incentives, organizational learning and collaboration mechanisms, and positive user attitudes collectively propel the large-scale adoption of AI, driving technological advancements in turn. While the study’s samples and scenarios are concentrated within China’s video industry, requiring caution in extrapolation, its findings offer significant reference value.
