Can GenAI Promote Complexity Skills In Scientists? - A Hypothetical Observation
Günter Müller-Czygan1, Julia Frank2 and Viktoriya Tarasyuk3
1Prof. Günter Müller-Czygan, Institute for Sustainable Water Systems (INWA) at Hof University of Applied Sciences, Germany
2Dr. Julia Frank, Institute for Sustainable Water Systems (INWA) at Hof University of Applied Sciences, Germany
3Viktoriya Tarasyuk, PhD, Institute for Sustainable Water Systems (INWA) at Hof University of Applied Sciences, Germany
Submission: December 15, 2024; Published: January 10, 2025
*Corresponding author: Günter Müller-Czygan, Institute for Sustainable Water Systems (INWA) at Hof University of Applied Sciences, Germany
How to cite this article: Günter Müller-C, Julia F, Viktoriya T. Can GenAI Promote Complexity Skills In Scientists? - A Hypothetical Observation. Ann Soc Sci Manage Stud. 2025; 11(2): 555810. DOI: 10.19080/ASM.2025.11.555810
Abstract
During the COVID-19 pandemic, broad sections of society regarded the role of science as very important. Unfortunately, this appears to be less the case with regard to climate change: subjective and emotionally based statements increasingly dominate the discussion, which must be viewed extremely critically in view of the complex interactions and extreme effects involved. At the same time, great hope is being placed in science to master these complex challenges with the help of artificial intelligence (AI). With the public appearance of generative AI (GenAI) in the form of ChatGPT, a new and partly critical discussion has emerged, even though AI-based technologies have made impressive progress through scientific research for many years and numerous developments are already in practical use. On the one hand, science is expected to provide insights and recommendations for the responsible use of AI. On the other hand, the use of AI in research work itself is viewed critically, for example in the application phase of research projects or when evaluating data and writing up and communicating results. This raises questions such as: how valid are scientific achievements that are produced with (the help of) AI? Especially when using generative AI, there is a risk that one’s own creative process degenerates through uncontrolled use, excessive dependency and distortion of results [1], because activities that go beyond routine processes are conveniently left to generative AI - a problem above all when processing complex tasks. An analysis by the authors of around 30 master’s theses from 2023 and 2024 in an international engineering master’s program showed that around one third of the theses were strongly influenced by GenAI output, recognizable by the typical list form, the text style and the lack of reference to the task. These theses were also among the third with the lowest grades. Without guidance on the effective use of GenAI, its use does not appear to lead to improved performance, but instead encourages the unthinking copying of text modules.
On the other hand, observations of the use of generative AI in everyday teaching and research show that GenAI can promote complexity competence, particularly when it is used in a targeted manner on the basis of appropriate training. The authors use GenAI in various contexts of their research, increasingly to support the solution of complex problems and tasks in complex environments. The main area of application is water management, and increasingly the analysis of urban areas with regard to adaptation to water-related challenges caused by climate change. The focus here is on the massive impact of extreme weather events such as heavy rainfall and prolonged periods of drought on urban and regional infrastructure as well as on forest and agricultural areas. These areas of application are not only highly complex in thematic terms: solutions must be implemented at the urban and municipal level, where the spatial, infrastructural and organizational environment is also highly complex. It is against this background that the observations on the use of generative AI described here were made, the theoretical framework presented in this article was developed and the hypothetical analysis was carried out, for which empirical evidence is still pending but in preparation.
Keywords: Complexity; GenAI; Loss of Creativity; Loss of Innovation; Prompting
The Initial Problem – Complexity Competence Does Not Follow The Increasing Complexity of The Present
One of the major challenges of the future in the context of work is maintaining the ability to act in complex environments [2]. If this fails, it will place new burdens on employees and new demands on organizations. Research by [3] has shown that the ability to deal with complexity is a predictor of work performance; in their studies, this ability predicted performance in complex situations better than general intelligence did. In 2000, Stephen Hawking called the 21st century the century of complexity in an interview with the San Jose Mercury News. In doing so, he referred to the combined interaction of (natural) laws that had previously been considered in isolation and to the lack of knowledge about how these fit together and what happens under extreme conditions. Exactly this interaction of natural phenomena that have so far been regarded as isolated and self-stabilizing can be seen under extreme conditions in the so-called tipping points of climate change. Nine global “core” tipping points, which contribute significantly to the functioning of the Earth system, and seven regional “impact” tipping points, which contribute significantly to human well-being or are of great value as unique features of the Earth system, have been identified [4]. Tipping points are followed by inflection points, which signal periods of rapid and intense changes in behavior, perceptions, actions or conditions [5]. Depending on the context, these inflection points can mark irreversible changes (as in the natural sciences or complexity theory) or changes so significant that they could be said to mark paradigm shifts (as in political regime change). Following [6], it should also be noted that after a certain threshold has been crossed, a self-sustaining acceleration occurs. As in the case of exceeding the 1.5°C limit in connection with climate change, this can have negative consequences if the development is not changed in time - as we now know it does. Alternatively, with sufficient and early knowledge of the relevant interrelationships, such a phenomenon can be consciously imitated and controlled. Major changes, such as those needed to combat climate change, are inherently difficult to plan and even more difficult to implement in a democratic process. Nevertheless, there is a large body of scientific evidence suggesting that, under certain circumstances, complex systems can undergo deliberately initiated disruptive system changes [7], [8], [9]. For this reason, the mechanisms behind such disruptive system changes, which could be used to initiate change, are attracting rapidly increasing attention in science. Accordingly, future scientific personnel in particular must have appropriate complexity skills in order to understand these processes and to find methods to influence them. The extent to which complexity competence keeps pace with the increasing complexity of the present has been widely studied in the field of project management. For example, [10] comes to the conclusion in his studies that project complexity is a major cause of persistent project failures, as it disrupts project stability. He also notes that the effects and definition of project complexity continue to be discussed in the project management community, and that research is still in the process of identifying what knowledge and skills can help project managers deal with project complexity. This suggests that project participants have insufficient skills in the area of complexity.
Another example of the failure-inducing effect of complexity can be seen in digitization projects in companies [11] and municipal organizations [12]. In particular, the lack of integration of complex future requirements into everyday work is the biggest hurdle to project success [13]. Moreover, traditional plan-driven models are repeatedly used in construction projects despite their known dynamic complexities, and these projects fail accordingly [14].
At the same time, artificial intelligence provides tools for finding better solutions in complex situations when developing measures and making decisions. Human decisions, by contrast, are often prone to error and subject to cognitive biases, especially when they are characterized by uncertainty, urgency and complexity. According to [15], the use of artificial intelligence (AI) can counteract human biases and bring transparency to decision-making processes.
However, such systems, particularly generative artificial intelligence systems, are only as good as their training data and can perpetuate biases or even generate and spread misinformation [16]. Although generative AI (GenAI) systems offer opportunities to increase user productivity in many tasks, such as programming and writing, studies show that users often work ineffectively with GenAI systems and lose productivity. [17] list four main reasons for productivity losses when using GenAI systems: a) a shift in user roles from production to evaluation, b) an unhelpful restructuring of workflows, c) interruptions, and d) a tendency to use automation to facilitate simple tasks and to complicate difficult tasks (or to avoid them). Point d) is of particular importance in the context of complexity, since GenAI can “tempt” us to make our working lives easier and to produce useful results with comparatively little effort, results which are, however, insufficient for complex solutions. This was shown, for example, in a study that examined whether GenAI (here ChatGPT) is better suited than humans to the qualitative analysis of text data [18]. The results showed that GenAI can process large text data sets much faster, but the reliability of the content and thematic assignment was lower than in human analysis, leading the authors to conclude that hybrid approaches are necessary, precisely because humans are better than GenAI at identifying nuanced and interpretative topics.
AI in science - curse or blessing?
As early as 2023, [19] summarized the main threats posed by the use of generative AI in three points: the risk of job loss, the risk of counterfeit content and the risk of sentient machines. In his opinion, the greatest risk of generative AI is that “generative AI will unleash an entirely new form of media that is highly personalized, fully interactive and potentially far more manipulative than any form of targeted content we have faced to date”. This danger also applies to scientific work. Studies show, for example, that while the creativity of individuals can be increased through the use of generative AI, the results within a scientific community as a whole lose their novelty [20]. [21] investigated the effect of the uncontrolled use of GenAI, here the image generator Midjourney, on the individual cognitive styles of 30 industrial design graduate students in the design process. The cognitive analyses showed that while GenAI significantly increased the frequency of reflections in all student groups, only minor significant differences appeared between cognitive styles. In addition, two experts evaluated all design outcomes and found no significant differences in novelty, variety, integrity and feasibility between the different cognitive styles. [22] showed the risk of significant bias in the beliefs of students, which, given the widespread use of GenAI among students, is of crucial importance for their further career paths and, in consequence, for science too.
The World Economic Forum, on the other hand, sees the use of AI as a boon for science: AI for scientific discovery is recognized there as one of the top 10 emerging technologies of 2024. Generative AI models have the potential to significantly accelerate scientific discovery. By analyzing extensive scientific data sets, formulating innovative hypotheses and identifying promising experimental approaches, they enable researchers to generate research results more efficiently and quickly. In addition, generative AI has already been used to identify connections between different subject areas and to derive findings from the interfaces between the analyzed disciplines [23]. Particularly in creative problem solving, an important component in the processing of complex tasks and problems, [24] see a change in the role of humans when GenAI-based agents are used to provide support. Agent-based creative systems offer a wide range of possible uses, e.g. for idea generation, evaluation and feedback. The role of humans is changing as the systems become increasingly autonomous: they act more as process managers than as drivers. Specialist knowledge and expertise remain fundamental, but people also need to develop new skills such as prompt engineering and curation.
In addition, there is more emphasis on fostering and exercising humanistic qualities such as empathy and ethics, subjective experience, critical judgment in social and historical contexts, and extrapolation, i.e. imagination beyond the data, rather than interpolation, i.e. pattern recognition [25]. Nevertheless, the scientific community fears losing trust if AI is used too extensively or incorrectly. [26] explain that trust in technology has so far been closely linked to its reliability. Here, actual technical reliability is equated with a good feeling in the human encounter (e.g. with the manufacturer, the operators or the auditors), and the technical review is omitted. This reveals trust as a typical element of complexity reduction [27], which functions as a mixture of knowledge and non-knowledge [28], [29], [30]. [31] even see this as a central component of digitalization that makes it possible to deal with increasing complexity. The more widespread use of AI systems must add this aspect to the previous considerations and reassess reliability. [32] points to an existing skeptical view of AI, namely that “trust in AI systems is not compatible with the lack of transparency that is characteristic of these systems”.
At the same time, he rejects excessive euphoria about the potential solutions offered by AI and advocates a balanced approach, summarizing it as follows: “reasonably dosed trust in AI systems should always go hand in hand with epistemic vigilance”. Another question that has arisen in the course of the discussion about the use of AI systems concerns the replaceability of scientists by AI. [33] state that this could only happen if an AI system (e.g. as an agent) imposed the rules of its actions on itself, including changing the rules and determining the means. This is currently and foreseeably not the case. Based on the concept of knowledge and action, humans - despite developments in AI, big data and machine learning - will remain the central actor in research-based knowledge work and cannot be (completely) substituted. For science itself, according to [34], there are not enough studies that have examined the influence of AI on scientific work in detail. For this reason, the focus is primarily on areas related to science, such as knowledge work, in order to transfer the extensive results available there to science.
In addition to the issue of trust, it is becoming increasingly apparent that digital technologies, including the associated infrastructure, are becoming more complex and therefore less transparent. [35] state that with the increasing use of such technologies, knowledge of the functions and interrelationships behind the system is reduced, and the balanced approach preferred by [32] is made more difficult or even impossible. As a consequence, one side categorically rejects such systems, while the other side is convinced of the results (however they were achieved) and places blind faith in them. This can become critical if AI-based systems also influence or even take over complex decisions, as is increasingly the case in application processes [36], or, in a scientific context, when AI-generated results are not questioned or verified.
Increasing complexity of reality vs. complexity of information systems / AI / organizations
In the development of information systems (including AI), the relationship between task complexity (which describes the goals and results that an IT solution should achieve) and the associated information process (result output) is crucial for the quality of results and their usability by the user. Since the creation and use of IT in the execution of tasks is a central area of research in information science, a better understanding of task complexity should be of great benefit to researchers in this field. However, this proves difficult because there are no adequate, consistent definitions for it, although there is good guidance in other fields such as behavioral science. This is all the more important to keep in mind as the use of generative AI increases and expands to all (complex) economic and social sectors. Against the background of complexity challenges, it is crucial to understand how generative models produce results in order to build trust and understand their behavior in real-world applications [37]. Without knowledge of the (real) complexity conditions, however, it is not possible to use generative AI in a targeted manner.
Using the example of process systems engineering (PSE), [38] emphasize the importance of aligning GenAI capabilities with the multi-scale requirements of PSE, ensuring robustness for greater safety and trust, managing data availability and heterogeneity, and developing relevant evaluation measures and metrics. In addition, the role of multimodal LLMs in improving solution strategies in PSE was particularly emphasized; multimodal structures play an important role in the context of complexity.
Organizations are increasingly operating in complex environments. In order to cope with this, they increase their internal complexity to the extent necessary to meet the requirements of the environment appropriately [39]. The effects of climate change in particular are increasing the complexity of the perceptible world, which is often the subject of research in research organizations. It can therefore be assumed that a similar adaptation of internal complexity to a complex environment will also take place in research organizations. Particularly affected by the increasing complexity caused by climate change are urban spaces, an increasingly interdisciplinary field of research: they already face major challenges in managing the interrelationships between population, infrastructure and institutions, and will experience an enormous increase in complexity due to the additional pressures of climate change [40]. A study by [41] shows, on the other hand, that study participants with lower cognitive complexity are less likely to believe in anthropogenic climate change than those with higher scores. It also clearly showed that participants with higher cognitive complexity were more likely to believe in climate change when they were exposed to a presentation of opposing arguments (i.e. misinformation) along with the correct facts about climate change, whereas participants with lower cognitive complexity were more likely to believe in climate change when presented with the climate change facts on their own. The study by [42] showed, using the example of spatial planning, that the optimum result can only be achieved if several concepts of complexity theory are taken into account. Overall, several levels of complexity come together in climate change, which cannot be adequately addressed with conventional, topic-focused research. In one research project, [43] emphasize the finding that the necessary adaptation governance in response to climate change is highly complex, even at a small-scale level, and can only be effectively addressed by inter- and transdisciplinary research. Similar experiences were made by [44], [45].
Environmental and resource issues in particular are closely linked and interdependent. In order to understand these challenges and mitigate their consequences, interdisciplinary research and research-based decision-making are required. Taking the development of the research system in Finland between 1990 and 2010 as an example, [46] shows how research in the field of environmental and resource protection, which was previously sectoral, i.e. subject-related (monodisciplinary), had to develop horizontally, i.e. across sectors (interdisciplinarily), due to increasing complexity. He showed that the prioritization problems between horizontal and sectoral research resulting from the increasing complexity led to friction, so that the self-organized change sought by the initiators was not achieved. One of the reasons for this was that formal aspects of the change, such as the allocation of financial resources, had to be adjusted, but the necessary complexity competence of those responsible, i.e. the management of uncertainty and of interrelated and difficult problems, was not adequately developed. A key unmet objective was to enable greater multidisciplinary diversity compared to individual research projects in order to deal with complexity. However, research projects are strongly dependent on their source of funding, which, without targeted management of interdisciplinarity, led to fragmentation rather than to the promotion of appropriate cooperation between disciplines. The conclusion of this study is that the transition to more horizontal (interdisciplinary) structures within the research system, the maintenance of thematic and temporal coherence in financial instruments and a deeper understanding of internal interactions in the research system are all crucial elements that contribute to the development of complexity competence in research management. Without overarching control, there will be no transformation in this direction, and thus the necessary development of complexity competence will not take place.
Added to this is the increasing thematic complexity, which must also be captured and analyzed. The acquisition of complexity competence not only supports interdisciplinary research, but also promotes the transition to sustainability in a broader context. It is not a new insight that corresponding (systemic) skills are necessary for complex tasks and challenges: especially for complex megaprojects in construction and IT, numerous studies have examined the consequences of a lack of complexity competence (e.g. [47], [48]). What do these findings mean for future research competence, and what role can or must GenAI play in this?
Tomorrow’s research - tomorrow’s expertise?
If you enter the term “research competence” in science-related search engines, you will receive several thousand to several million results, depending on the language. It is noticeable that there is hardly a uniform definition; rather, research competence is discussed from a wide variety of perspectives and specialist disciplines. In a random sample drawn from the search engine results, [49] state that aspects such as independence, search, initiative, experimentation, practical activity, a situation of uncertainty, collaboration, the existence of different points of view and contradictions are key concepts of research competence. [50] have attempted to develop a Research Competencies Scale (RCS) to make research competence measurable. In the course of their research, they identified various deficits which, in their opinion, indicated a lack of research competence (e.g. after experts reviewed published articles in specialist journals, they came to the conclusion that “a large proportion” of printed papers should not have been published (cited in [51]), and according to [52], the majority of researchers reviewed did not report statistical power, effect sizes or psychometric properties of the instruments used). In their study, [50] described four research competencies:
i. Generating research ideas/literature research
ii. Research methodology/processes
iii. Research ethics and
iv. Dissemination of research results/scientific writing.
On this basis, 69 items were identified in these six research areas:
i. Research inquiry/literature research
ii. General research methodology/processes
iii. Qualitative research methodology/processes
iv. Quantitative research methodology/processes
v. Research ethics and
vi. Dissemination of research results/scientific writing.
[53] also created a research literacy evaluation tool for the education sector to assess aspects such as the proposed use of results, target population, conceptualization of the construct, instrument format and creation of validity evidence. The results showed that these instruments are used both to assess the acquisition and mastery of research skills and to evaluate the effectiveness of proposed interventions/measures.
As GenAI enters society, business and ultimately science, the question arises as to whether the current understanding of research competence (and thus the education and training of students and young researchers) is sufficient to meet the demands of a world that is increasingly driven by GenAI [54].
Use of GenAI in science - an approach
In science, generative AI is expected to revolutionize engineering practice, research and education [55], [56]. In particular, GenAI shows great potential in supporting and improving engineering design, with tools such as machine learning and generative algorithms expected to streamline the solution of complex problems and increase creativity [57]. Accordingly, researchers and teachers are calling for greater integration of generative AI as part of the future direction of research and teaching. Initial studies on research experiences paint a differentiated picture. [58] identified three perception clusters among Danish researchers using exploratory factor analysis: “GenAI as a workhorse”, “GenAI only as a language assistant” and “GenAI as a research accelerator”. Differing opinions on the impact of GenAI on research integrity were identified: language processing and data analysis were generally rated positively, while experiment design and peer review tasks were viewed more critically. [59] found that Bachelor students used ChatGPT in technical design tasks in different ways: around 60% of the approximately 400 respondents stated that they used GenAI to a considerable extent in their projects, while around 40% saw limited or no benefit in its use. In addition, the authors carried out an analysis with Google’s NotebookLM of 51 selectively chosen articles on AI or GenAI in research published between 2020 and 2024; this provided an initial overview of the use, benefits, challenges and risks, and the results are summarized in Table 1.

Despite the potential outlined in Table 1, there are only a few published studies that describe the precise application of GenAI in quantitative data analysis [60]. The authors’ research on which this article is based has confirmed this: despite extensive searches in November and December 2024 in databases such as Elicit, Semantic Scholar and Google Scholar, only 51 qualitatively acceptable publications could be found. This shows that the use of generative AI in research is still at an early stage. The few publications and this small study already reveal a wide range of possible applications and potential. There is a strong need for further research to deepen and improve the application of GenAI in various research methods [60]. In addition, best practices for the use of GenAI in research need to be developed, taking into account ethical and data protection aspects [60], and the effects of the use of GenAI on the world of work in research need to be determined, e.g. with regard to working conditions and the qualification requirements of researchers [34]. As a possible consequence of increased GenAI use, job profiles may change, e.g. in the field of prompt development and AI training (this also affects research) [68], [64]. Work processes may also change; the focus is likely to shift from the execution of routine tasks to the control and monitoring of AI systems [68], [64].
In terms of innovation and creativity, it is assumed that GenAI can accelerate innovation processes and help organizations to develop new ideas, products and business models [68], [67]. Initial studies, e.g. by [69] on the use of GenAI in the development of innovative business models, show that GenAI can significantly influence business models in all industries in the areas of value creation innovation, innovation of new offerings and value gain innovation. [70] show that GenAI has the potential to drive innovation and growth; they emphasize the need for a balance between innovation and cost efficiency.
In addition, [70] found that GenAI will play a significant role in providing information in decision-making processes in order to improve market analysis and R&D functions. However, as in other studies, case-based, quantitatively validated findings are lacking. Most importantly, GenAI is seen as helping to open up new areas of knowledge by analyzing complex data and identifying new patterns and relationships [60], [71].
In summary, three main aspects for future scientific work can be derived from the authors’ study. Firstly, there is a lack of meaningful and valid studies on the use of GenAI in research, including an investigation of which GenAI is used in which research areas and how it is used in each case. The second aspect concerns the question of how GenAI can be used most effectively for the various scientific tasks. The third aspect revolves around the question of which skills scientists need to acquire in relation to GenAI in order to be able to adapt to the emerging developments. The use of generative AI in research offers enormous potential, but also poses challenges and risks, as outlined above. By overcoming these challenges and promoting the necessary skills and framework conditions, GenAI can become a valuable tool for research and contribute to new scientific findings and innovations.
Why correct prompting promotes complexity competence
To answer the question “Can GenAI promote complexity skills in scientists?” on the basis of the results presented above on complexity and the use of GenAI in science, the 51 selected sources were additionally evaluated with regard to prompt structure, again using NotebookLM as a supporting system. The analysis shows that prompts should be clear, concise, specific and contextual: they should provide the GenAI model with all the information necessary to understand the task and generate the desired output. Effective prompting therefore requires a combination of technical skills and creative thinking. Table 2 shows in detail what the 51 analyzed sources identify as good and successful prompting.
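As an illustration of this structure, the following minimal sketch shows how a prompt for a complex task could be assembled from the elements identified above (role, context, task, constraints, output format). The example task and all wording are hypothetical illustrations; they do not reproduce the prompts used in the analyzed sources or by the authors.

```python
# Minimal sketch of a structured prompt assembled from the elements identified
# in the analysis: role, context, task, constraints and output format.
# The example task and wording are hypothetical, not the authors' actual prompts.

def build_prompt(role: str, context: str, task: str,
                 constraints: list[str], output_format: str) -> str:
    """Combine the prompt elements into one clearly separated text block."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    role="You support a master's student in a multi-criteria analysis of digitization options.",
    context="A new wastewater treatment plant (170,000 population equivalents) is to be built in Germany.",
    task="List the main dimensions that a digitalization strategy for this plant should cover.",
    constraints=[
        "Name the assumptions behind every suggestion.",
        "Distinguish between technical, organizational and regulatory aspects.",
        "Ask clarifying questions if information is missing.",
    ],
    output_format="A structured list with a one-sentence justification per item.",
)
print(prompt)
```

The point of such a template is not the specific wording but the discipline it enforces: before prompting, the user has to analyze the task and make context, constraints and the expected output explicit.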
A comparison of these results with the following selected literature on complexity competence shows that there are overlaps and similarities with good prompting, even if, according to [78], attempts to record and catalog criteria for complexity competence are still few in number.
[79] describe, for example, that the causal networking of individual functions in IT systems to form new individual functions strengthens the analytical thinking and complexity skills of IT specialists. A comparable effect is expected from good prompt engineering, as asking the right questions likewise only leads to relevant and meaningful content through appropriate analytical thinking and a prior analysis of the task complexity to be processed.
[80] emphasizes the need for multidimensional observation competence in science, which can be significantly improved by the application of GenAI if the required literature work across various dimensions can be considerably shortened. In other words, where a multidimensional approach was previously omitted for reasons of cost, this is no longer a criterion for exclusion.
For water management, the authors’ main field of work, [81] emphasize, based on the evaluation of the 2021 flood of the century in the German Ahr valley, that the following factors are decisive for complexity competence in dealing with disasters:
i. ongoing monitoring of the institution’s crisis management and adjustments based on it,
ii. filling organizational and/or structural gaps, and
iii. taking actions other than those learned in education and training in order to deal with the crisis more effectively and/or efficiently.
Here too, good prompting offers the possibility of creating quick and multi-perspective queries that support these three points in their application.
[82] make it clear in their work on dealing with complexity in management tasks in the corporate environment that complexity competence requires a minimum degree of flexibility in order to adapt to the changing complexities of the environment (organization) when necessary. This is often achieved through experimentation and an iterative approach to solutions, and thus corresponds to one of the prompting approaches described by [75] and recommended and listed in Table 2.
In a study on the use of GenAI by visually oriented designers, [83] found that the group studied often had difficulty creating and fine-tuning prompts and accurately translating their intentions into rich GenAI inputs. The authors then developed a tool for the multimodal input of prompts together with the designers. As a result, the designers developed innovative uses of Design-Prompt, including sophisticated multimodal prompts and a multimodal prompt pattern that maximizes the novelty of their designs while maintaining the necessary consistency. This was only feasible because the authors used a structured observational study of the 12 professional designers to identify their intentions in using GenAI, their alignment of expectations and their perceptions of AI transparency, and to transfer these to the multimodal input tool.
Similar to [83], [73] has also developed an adapted multimodal input tool, focusing on prompting adapted to systemic coaching by analyzing processes and participants. It is referred to as triadic coaching because, in contrast to plain GenAI use and the examples presented, the human coach remains a fixed real partner in the utilization process (triadic = three participants), in addition to the GenAI input tool and the user (coachee).

Does structured prompting help when dealing with complex conditions? An initial practical test.
In order to subject the initial question “Can GenAI promote complexity skills in scientists?” to a first small-scale practical test against the background of the literature analysis carried out, students in the “Smart Water” course of the Master’s program “Sustainable Water Management and Engineering” at Hof University of Applied Sciences were given the task of carrying out a complex multi-criteria analysis, using the method of multi-level analysis [84], [45], for the digitization of a new sewage treatment plant to be built in Germany (170,000 population equivalents). The result of this analysis forms the assessed coursework and is to be submitted as a written report at the end of the semester. The task was presented and discussed at the end of November 2024. After the lecturer had presented the wastewater treatment plant project and the method of multi-level analysis, the students were able to work on the task in class at the beginning of December 2024 and develop initial considerations. In the process, it became apparent that there were comprehension problems regarding the complexity of the tasks. In light of this, a teaching unit was added in mid-December 2024 in which students could continue working on the tasks with the help of GenAI under the lecturer’s guidance (students primarily used the freely available version of ChatGPT for this).
This teaching unit was used to conduct a survey among the students to find out what help GenAI can provide when working on complex tasks. The following identical questions were asked at the beginning of the lesson and again after working with GenAI:
i. How well did you understand the overall task of creating a digitalization strategy for the new wastewater treatment plant? (answer scale: 1 = not understood at all; 7 = fully understood)
ii. How confident do you feel about transferring the selected partial task description to the creation of a digitalization strategy for the new wastewater treatment plant? (answer scale: 1 = completely uncertain; 7 = very certain)
iii. How confident do you feel about defining the selected task description in more detail? (answer scale: 1 = completely uncertain; 7 = very certain)
iv. How confident do you feel about developing goals and questions relating to the digitalization of the selected topic? (answer scale: 1 = completely uncertain; 7 = very certain)
The following additional questions were likewise asked at the beginning of the lesson and after the tasks had been completed:
i. How confident do you feel about using generative AI effectively and solving the subtask well? (answer scale: 1 = completely uncertain; 7 = very certain)
ii. Do you already have an idea of the prompting strategy you will use to complete the subtask? (answer scale: 1 = I do not yet have a clear idea of my prompting strategy; 7 = I already have a clear idea of my prompting strategy)
Figure 1 shows the summarized results of the questions asked.
Figure 1 shows the mean values across all 14 students. The results indicate that the use of GenAI helps students to better understand complex tasks. During the exercise, students were able to ask the lecturer questions and to discuss both their content analysis and their prompting steps. In addition, the students worked in pairs, which allowed further verbal discussion of the task. Future studies will investigate how important it is to specify prompts precisely and what the next step of reflecting on GenAI results to deepen the understanding of complexity should look like.
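To make the evaluation behind Figure 1 transparent, the following minimal sketch illustrates how such a pre/post comparison of mean values per question can be computed. The response values shown are invented placeholders, not the 14 students’ actual answers.

```python
# Illustrative pre/post comparison of survey means on a 7-point scale.
# The numbers below are invented placeholders, not the actual responses
# of the 14 students summarized in Figure 1.
from statistics import mean

responses = {
    "Q1 understanding of overall task": {"pre": [3, 4, 3, 5], "post": [5, 6, 5, 6]},
    "Q2 transferring the subtask":      {"pre": [3, 3, 4, 4], "post": [5, 5, 6, 5]},
}

for question, answers in responses.items():
    pre, post = mean(answers["pre"]), mean(answers["post"])
    print(f"{question}: pre={pre:.1f}, post={post:.1f}, change={post - pre:+.1f}")
```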

Scientific and methodological basis
The findings and questions presented here serve as a first attempt to categorize subjective observations made when applying GenAI in the engineering sciences. In terms of the GRADE approach (Grading of Recommendations Assessment, Development and Evaluation [85]) for evaluating the quality of evidence from scientific research, the presented results fall into the category of “observational studies” and thus have a low quality of evidence [86], even though a small survey on the use of GenAI in a complex water management project was conducted among 14 students. The higher the GRADE quality, the lower the risk for the effect evaluation [87-90]. In the strict sense of the GRADE approach, the presented findings do not represent a strong study based on an experimental design, but the observations (including the survey) and the knowledge analysis (literature research) were carried out according to scientific criteria.
Is prompting the new research? A summary
This study investigated the role of generative AI (GenAI) in science and its influence on the complexity competence of researchers. It was found that targeted use and strategically developed prompting of GenAI can promote complexity competence among scientists, even if some publications mention the risk that creativity and innovative strength may suffer due to dependence on GenAI tools and the convenient use they encourage. Used correctly, GenAI offers potential for increasing efficiency in various research areas, but also poses challenges in terms of replicability, ethical implications and the need for interdisciplinary collaboration. Good and strategically sensible prompting can train analytical thinking, the multidimensional consideration of questions and tasks, and the factors necessary for dealing with disasters. In addition, prompting can be designed in such a way that it promotes the flexibility required for complexity competence by deliberately including experimental and iterative approaches as part of the prompting. On the other hand, the analysis results in Table 2 show that it is of great importance to understand how GenAI works and where its limits and possibilities lie in order to obtain an appropriate result for an intended task. The presented examples also make clear that, in addition to adapted and effective prompting, suitable multimodal input tools can significantly enhance the process, especially when complex and extensive prompting texts have to be used. If these framework conditions are given, it is precisely the extensive prompting texts that represent an essential building block for the extended development of complexity competence. The creation of such extensive prompting texts requires good preparation in the form of an appropriate analytical and multi-perspective examination of the question or problem description, and this kind of thinking process is also necessary when dealing with complex processes. Thus, the initial hypothesis can be regarded as provisionally confirmed, subject to empirical evidence. Corresponding studies that empirically confirm the assumptions and observations presented are still lacking, but adequate studies are already in preparation by the authors.
References
- Zebua N (2024) Evaluating the Impact of GenAI in High School Education: A Critical Review. Polygon: Jurnal Ilmu Komputer dan Ilmu Pengetahuan Alam, pp. 79-86.
- Dexheimer JC (2017) Umgang mit Komplexität als Kompetenz am Arbeitsplatz: komplexes und kollaboratives Problemlösen. Heidelberg: Ruprecht-Karls-Universität Heidelberg (Dissertation).
- Danner D, Hagemann D, Schankin A, Hager M, Funke J (2011) Beyond IQ: A latent state-trait analysis of general intelligence, dynamic decision making, and implicit learning. Intelligence 39(5): 323-334.
- Armstrong McKay DI, Arie Staal, Jesse F Abrams, Ricarda Winkelmann, Boris Sakschewski, et al. (2022) Exceeding 1.5°C global warming could trigger multiple climate tipping points. Science 377(6611).
- Thomas R (2022) Tipping Points. Encyclopedia of Quality of Life and Well-Being Research. Cham: Springer.
- Bretschger L and Leuthard M (2024) Die Bedeutung von Kipppunkten für eine nachhaltige Entwicklung. Perspektiven der Wirtschaftspolitik. 24(4).
- Lenton TM, Hermann Held, Elmar Kriegler, Hans Joachim Schellnhuber, Stefan Rahmstorf, et al. (2008) Tipping elements in the Earth's climate system. Proceedings of the national Academy of Sciences 105(6): 1786-1793.
- Otto IM, Jonathan F Donges, Roger Cremades, Hans Joachim Schellnhuber, et al. (2020) Social tipping dynamics for stabilizing Earth’s climate by 2050. Proceedings of the National Academy of Sciences 117(5): 2354-2365.
- Zeppini P, Frenken K, Kupers R (2014) Threshold models of technological transitions. Environmental Innovation and Societal Transitions 11: 54-70.
- Clark JM (2024) The Extent of Project Management Competencies and Project Complexity on Project Success: A Correlational Study. Minneapolis: Capella University.
- Grohganz HG (2024) Beispiel Unternehmensplanung: Hohe Komplexität Verhindert Digitalisierung. Komplexität verstehen, beherrschen, gestalten: Denkanstöße für Manager und Unternehmer. Berlin, Heidelberg: Springer.
- Müller-Czygan G (2020) Smart Water - How to Master the future challenges of water management. Chandrasekaran PT, Javaid MS and Sadiq A. Resources of Water. Birmingham: Intech Open.
- Müller-Czygan G (2021) HELIP® and “Anyway Strategy” – Two Human-Centred Methods for Successful Digitalization in the Water Industry. Annals of Social Science & Management Studies.
- Lalmi A, Fernandes G and Souad SBA (2021) Conceptual hybrid project management model for construction projects. Procedia Computer Science. pp: 181.
- Kalimeris J, et al. (2022) Künstliche Intelligenz im Management: Chancen und Risiken von Künstlicher Intelligenz als Entscheidungsunterstützung. In: Harwardt M, et al. Praxisbeispiele der Digitalisierung: Trends, Best Practices und neue Geschäftsmodelle. Wiesbaden: Springer Fachmedien Wiesbaden.
- Qadir J (2023) Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. IEEE Global Engineering Education Conference, Kuwait.
- Simkute A, et al. (2024) Ironies of Generative AI: Understanding and Mitigating Productivity Loss in Human-AI Interaction. International Journal of Human-Computer Interaction.
- Prescott MR, et al. (2024) Comparing the efficacy and efficiency of human and generative AI: Qualitative thematic analyses. JMIR AI.
- Rosenberg L (2023) Why generative AI is more dangerous than you think. VentureBeat.
- Doshi AR and Hauser OP (2024) Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances.
- Liu H, Zhang X, Zhou J, Shou Y, Yin Y, et al. (2024) Cognitive styles and design performances in conceptual design collaboration with GenAI. International Journal of Technology and Design Education pp. 1-34.
- Yang X & Zhang M (2024) GenAI Distortion: The Effect of GenAI Fluency and Positive Affect. arXiv.
- Bühler MJ (2024) Accelerating Scientific Discovery with Generative Knowledge Extraction, Graph-Based Representation, and Multimodal Intelligent Graph Reasoning. Machine Learning: Science and Technology.
- Dwivedi D, Banerjee S (2024) GenAI-ing Ideas? Understanding Human-IS Collaboration for Creative Problem Solving. AI in Business and Society.
- Li J, Cao H, Lin L, Hou Y, Zhu R, et al. (2024) User Experience Design Professionals’ Perceptions of Generative Artificial Intelligence. arXiv.
- Prugger J, Jörg S, Reder M (2024) The Shift of Trust from Humans to Machines: An Extension of the Interpersonal Trust Paradigm in the Context of Artificial Intelligence. Journal of Practical Philosophy 11(1).
- Weitze MD and Heckl WM (2016) Vertrauen: Eine Art der Komplexitätsreduktion. Wissenschaftskommunikation-Schlüsselideen, Akteure, Fallbeispiele. Berlin, Heidelberg : Springer Spektrum, pp. 113-117.
- Moll G and Schütz J (2022) Wissenstransfer - Komplexitätsreduktion - Design. Bielefeld: wbv.
- Hübler M, Hübler M (2020) Nachwort: Die wahre Komplexitätsreduktion beginnt beim Menschen. Die Führungskraft als Mediator: Mit mediativen Kompetenzen souverän führen und Veränderungen begleiten. Wiesbaden: Springer Gabler, pp. 209-210.
- Mohajerzad H, Specht I (2021) Vertrauen in Wissenschaft als komplexes Konzept. In: Moll G, Schütz J. Wissenstransfer - Komplexitätsreduktion - Design. Bielefeld: wbv, pp. 31-49.
- Coester U, & Pohlmann N (2021) Vertrauen–ein elementarer Aspekt der digitalen Zukunft. Datenschutz und Datensicherheit-DuD 45: 120-125.
- Hauswald R (2024) Caveat usor: Vertrauen und epistemische Wachsamkeit gegenüber künstlicher Intelligenz. Zeitschrift für Praktische Philosophie 11(1).
- Gethmann CF (2022) Zur Frage der Ersetzbarkeit des Menschen durch KI in der Forschung. In: Gethmann CF, et al. Künstliche Intelligenz in der Forschung - Neue Möglichkeiten und Herausforderungen für die Wissenschaft. Berlin, Heidelberg: Springer, pp. 43-78.
- Nitsch V & Buxmann P (2021) Auswirkungen von Digitalisierung und KI auf die wissenschaftliche Arbeit. Getmann CF, et al. Künstliche Intelligenz in der Forschung. Berlin, Hamburg: Springer, pp. 127-146.
- Coester U and Pohlmann N (2021) Vertrauen – ein elementarer Aspekt der digitalen Zukunft. Datenschutz und Datensicherheit.
- Lyer V (2023) Revolutionizing recruitment: the synergy of artificial intelligence and human resources. Review of Artificial Intelligence in Education.
- Chavan JD, Mankar CR and Patil VM (2024) Opportunities in Research for Generative Artificial Intelligence (GenAI), Challenges and Future Direction: A Study. International Research Journal of Engineering and Technology pp. 446-451.
- Decardi-Nelson B, et al. (2024) Generative AI and process systems engineering: The next frontier. Computers & Chemical Engineering. pp. 1-23.
- Meissner JO, Heike M and Sigrist D (2023) Organisation und Komplexität. Organisationsdesign in einer komplexen und instabilen Welt. Wiesbaden: Springer Gabler, pp. 13-34.
- Ruth M, Coelho D (2015) Understanding and managing the complexity of urban systems under climate change. Climate Policy. 7(4): 317-336.
- Chen L and Unsworth K (2019) Cognitive complexity increases climate change belief. Journal of Environmental Psychology 65: 101316.
- Timmermans W, López FÓ and Roggema R (2012) Complexity theory, spatial planning and adaptation to climate change. Roggma R (ed.), Swarming landscapes: The art of designing for climate adaptation. S.l. : Springer Dordrecht, pp. 43-65.
- Vogelpohl T and Feindt PH (2024) Transdisziplinäre Resilienzforschung für adaptive Wasser-Governance: Governance der Anpassung an den Klimawandel. Ökologisches Wirtschaften-Fachzeitschrift. 39(4): 20-21.
- Müller-Czygan G (2023) Eine Baumrigole macht noch keine Schwammstadt. wwt Modernisierungsreport, pp. 61-67.
- Müller-Czygan G, Manuela Wimmer, Julia Frank, Michael Schmidt, Viktoriya Tarasyuk, et al. (2023) Mindset Changes for Dealing with Complex Climate Impacts – Experiences with a Tool from Courses on Digitalization and Climate-Adapted Urban Development. Ann Soc Sci Manage Stud pp. 1-20.
- Inkeröinen J (2023) Towards complexity competence in environmental research governance. Oulu: University of Oulu (Dissertation).
- Nyarirangwe M and Babatunde K (2019) Megaproject complexity attributes and competences: lessons from IT and construction projects. International Journal of Information Systems and Project Management 7(4): 77-99.
- Ahmadi Eftekhari N, et al. (2022) Project manager competencies for dealing with socio-technical complexity: a grounded theory construction. Systems.
- Koval V, Kushnir A, Vorona V, Balakirieva V, Moiseienko N, et al. (2023) Formation of future specialists research competence in the process of professional training. Amazonia Investiga pp. 77-86.
- Swank JM and Lambie GW (2016) Development of the Research Competencies Scale. Measurement and Evaluation in Counseling and Development 49(2): 91-108.
- Tuckman BW (1990) A proposal for improving the quality of published educational research. Educational Researcher 19(9): 22-25.
- Wester KL and Borders LD (2014) Research competencies in counseling: A Delphi study. Journal of Counseling & Development 92(4): 447-458.
- Vázquez-Rodríguez O (2024) Assessment of research competence in the educational field: an analysis of measurement instruments. Alteridad 19(2): 202-215.
- Chiu TK (2024) Future research recommendations for transforming higher education with generative AI. Computers and Education: Artificial Intelligence.
- Tao F, Xin Ma, Weiran Liu, Chenyuan Zhang, et al. (2024) Digital Engineering: State-of-the-art and perspectives. Digital Engineering pp. 1-20.
- Tapeh ATG and Naser MZ (2023) Artificial Intelligence, Machine Learning, and Deep Learning. Archives of Computational Methods in Engineering 30: 115-159.
- Hu X, Liu A, and Dai Y (2024) Combining ChatGPT and knowledge graph for explainable machine learning-driven design: a case study. Journal of Engineering Design, pp. 1-23.
- Andersen JP, Lise Degn, Rachel Fishberg, Ebbe Krogh Graversen, Serge PJM Horbach, et al. (2024) Generative Artificial Intelligence (GenAI) in the research process–a survey of researchers’ practices and perceptions. Aarhus: Department of Political Science - Danish Centre for Studies in Research and Research Policy.
- Dai Y (2024) Why students use or not use generative AI: Student conceptions, concerns, and implications for engineering education. Digital Engineering.
- Perkins M and Roe J (2024) Generative AI Tools in Academic Research: Applications and Implications for Qualitative and Quantitative Research Methodologies. arXiv.
- Chubb J, Cowling P and Reed D (2022) Speeding up to keep up: exploring the use of AI in the research process. AI & society pp. 1439-1457.
- Jönsson A (2024) Prompting for progression: How well can GenAI create a sense of progression in a set of multiple-choice questions?. Stockholm: KTH Royal Institute of Technology.
- Gethmann CF, et al. (2022) Auswirkungen von Digitalisierung und KI auf die wissenschaftliche Arbeit. In: Künstliche Intelligenz in der Forschung: Neue Möglichkeiten und Herausforderungen für die Wissenschaft. Ethics of Science and Technology Assessment. Berlin, Heidelberg: Springer, pp. 127-146.
- Holmes W and Miao F (2023) Guidance for generative AI in education and research. Paris: UNESCO.
- Schemmer M, et al. (2022) A meta-analysis of the utility of explainable artificial intelligence in human-AI decision-making. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. Oxford: Association for Computing Machinery, pp. 617-626.
- Nachtwei J and Sureth A (2020) Sonderband Zukunft der Arbeit - Human Resources Consulting Review Band 12. Berlin: VQP.
- Mariani M and Dwivedi YK (2024) Generative artificial intelligence in innovation management: A preview of future research developments. Journal of Business Research.
- Yang A, Li Z and Li J (2024) Advancing GenAI Assisted Programming - A Comparative Study on Prompt Efficiency and Code Quality Between GPT-4 and GLM-4. arXiv.
- Kanbach DK, Louisa Heiduk, Georg Blueher, Maximilian Schreiter & Alexander Lahmann, et al. (2024) The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective. Review of Managerial Science 18: 1189-1220.
- Santos G and Sofia V (2024) Generative AI: A Literature Review on Business Value. AMCIS 2024 Proceedings. Salt Lake City: IS/IT scholars.
- Yan L, Martinez-Maldonado R and Gasevic D (2024) Generative artificial intelligence in learning analytics: Contextualising opportunities and challenges through the learning analytics cycle. Proceedings of the 14th Learning Analytics and Knowledge Conference. Kyoto: Association for Computing Machinery, pp. 101-111.
- Robertson J, Caitlin Ferreira, Elsamari Botha, Kim Oosthuizen, et al. (2024) Game changers: A generative AI prompt protocol to enhance human-AI knowledge co-construction. Business Horizons 67(5): 499-510.
- Geißler H (2024) KI-Coaching: Wie man Künstliche Intelligenz im und für Coaching nutzen kann (in Vorbereitung). Wiesbaden: Springer Nature.
- Korzynski P, Grzegorz Mazurek, Pamela Krzypkowska, Artur Kurasinski, et al. (2023) Artificial intelligence prompt engineering as a new digital competence: Analysis of generative AI technologies. Entrepreneurial Business and Economics Review, pp. 25-37.
- Ekin S (2023) Prompt Engineering For ChatGPT: A Quick Guide To Techniques, Tips, And Best Practices.
- Vera F (2024) Student Performance in Writing Prompts for Text-based GenAI tools in a Research Methodology Course. Transformar 5(2): 71-90.
- Bozkurt A (2024) Tell me your prompts and I will make them true: The alchemy of prompt engineering and generative AI. Oslo: International Council for Open and Distance Education.
- Schiersmann C and Hausner MB (2021) Komplexitätskompetenz als zentrales Element beraterischen Handelns - Bestandsaufnahme und Ausblick. In: Scharpf M and Frey A (Hg.) Vom Individuum her denken. Bielefeld: wbv, pp. 71-83.
- Teicher I and Burges S (2024) Förderung der digitalen Resilienz in Unternehmen. Digitale Resilienz: Kernkompetenz für eine neue Arbeitswelt. Wiesbaden: Springer Fachmedien, pp. 27-33.
- Mußmann F (2013) Komplexe Natur—Komplexe Wissenschaft: Selbstorganisation, Chaos, Komplexität und der Durchbruch des Systemdenkens in den Naturwissenschaften. Wiesbaden: Springer.
- Reinert J, Wingen Martha, Klopries Elena-Maria, Schüttrumpf Holger, Dittmeier Cordula, et al. (2023) Hochwasserwarnung: Lessons to Learn nach dem Julihochwasser 2021. Korrespondenz Wasserwirtschaft, pp. 428-434.
- Schuh G, Krumm S and Amann W (2013) Navigation für Führungskräfte. Wiesbaden: Gabler.
- Peng X, Koch J and Mackay WE (2024) Designprompt: Using multimodal interaction for design exploration with generative AI. Proceedings of the 2024 ACM Designing Interactive Systems Conference. Copenhagen: IT University Copenhagen.
- Müller-Czygan G (2024) Digitizing Complex Tasks in Water Management with Multilevel Analysis. IntechOpen.
- Shao S, Kuo Liang-Tseng, Huang Yen-Ta, Lai Pei-Chun, et al. (2023) Using Grading of Recommendations Assessment, Development, and Evaluation (GRADE) to rate the certainty of evidence of study outcomes from systematic reviews: A quick tutorial. Dermatologica Sinica 41(1): 3-7.
- Meerpohl JJ, Gero Langer, Matthias Perleth, Gerald Gartlehner, Angela Kaminski-Hartenthaler, et al. (2012) GRADE-Leitlinien: 3. Bewertung der Qualität der Evidenz (Vertrauen in die Effektschätzer). Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 106(6): 449-456.
- Stewart B (2023) GenAI on GenAI: Two Prompts for a Position Paper on What Educators Need to Know. Irish Journal of Technology Enhanced Learning. 7(2): 32-41.
- DeForest JL (2024) Mitigating generative AI inaccuracies in soil biology. Soil Biology and Biochemistry.
- Ronanki K, Cabrero-Daniel B and Horkoff J (2023) Requirements engineering using generative AI: prompts and prompting patterns. arXiv.
- Sherson J and Vinchon F (2024) Facilitating Human Feedback for GenAI Prompt Optimization. arXiv.
- Schulhoff S, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, et al. (2024) The prompt report: A systematic survey of prompting techniques. arXiv.
- Van Der Maden W, Evert Van Beek, Iohanna Nicenboim, Vera Van Der Burg, Peter Kun, et al. (2023) Towards a Design (Research) Framework with Generative AI. Companion Publication of the 2023 ACM Designing Interactive Systems Conference. pp. 107-109.
- Santana VFD (2024) Challenges and Opportunities for Responsible Prompting. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. pp. 1-4.
- Wu C, Chou CY and Tsai HP (2024) A Framework for PromptOps in GenAI Application Development Lifecycle. ICLR 2024. pp. 1-10.
- Tschopp M, Ruef M, Monett D (2022) Vertrauen Sie KI? Einblicke in das Thema Künstliche Intelligenz und warum Vertrauen eine Schlüsselrolle im Umgang mit neuen Technologien spielt. Kreativität und Innovation in Organisationen: Impulse aus Innovationsforschung, Management, Kunst und Psychologie, pp. 319-346.
- von Richthofen G, Ogolla S and Send H (2022) Adopting AI in the context of knowledge work: Empirical insights from German organizations. Information.
- Tschang FT and Almirall E (2021) Artificial intelligence as augmenting automation: Implications for employment. Academy of Management Perspectives 35(4): 642-659.
- Acypreste RD and Paraná E (2022) Artificial Intelligence and employment: a systematic review. Brazilian Journal of Political Economy 42(4): 1014-1032.
- Yu L, Li Y (2022) Artificial intelligence decision-making transparency and employees’ trust: The parallel multiple mediating effect of effectiveness and discomfort. Behavioral Sciences.
- Wulff K, Finnestrand H (2024) Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers. AI & Society. 39: 1843-1856.
- Ullrich A, Gergana Vladova, Felix Eigelshoven, André Renz (2022) Data mining of scientific research on artificial intelligence in teaching and administration in higher education institutions: a bibliometrics analysis and recommendation for future research. Discover Artificial Intelligence.
- Silva VJ, Bonacelli MBM and Pacheco CA (2024) Framing the effects of machine learning on science. AI & Society 39: 749-765.
- Funke J (2022) Be prepared for the complexities of the twenty-first century! In: Sternberg RJ, Ambrose D and Karami S (eds.) The Palgrave Handbook of Transformational Giftedness for Education. Cham: Springer International Publishing, pp. 171-180.
- Meli K, Taouki J and Pantazatos D (2024) Empowering educators with generative AI: The genAI education frontier initiative. EDULEARN24 Proceedings. Palma, pp. 4289-4299.
- Gkinko L and Elbanna A (2022) Hope, tolerance and empathy: employees' emotions when using an AI-enabled chatbot in a digitalised workplace. Information Technology & People. 35(6): 1714-1743.
- Araujo T, Helberger N, Kruikemeier S, de Vreese Claes, et al. (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & society 35: 611-623.
- Wang H, Zhang H, Chen Z, Zhu J, Zhang Y, et al. (2022) Influence of Artificial Intelligence and Robotics Awareness on Employee Creativity in the Hotel Industry. Frontiers in Psychology.
- Ross RL, Allison A Toth, Eric D Heggestad, George C Banks, et al. (2024) Trimming the fat: Identifying 15 underlying concepts from 26 in the social skills domain. Journal of Organizational Behavior.
- Leimeister JM and Blohm I (2022) Digitalization and the Future of Work. Die Unternehmung. pp. 1-5.
- Memmert L and Bittner E (2022) Complex problem solving through human-AI collaboration: literature review on research contexts. Proceedings of the 55th Hawaii International Conference on System Sciences, pp. 378-389.
- Morley J, Kinsey L, Elhalal A, Garcia F, Ziosi M, et al. (2023) Operationalising AI ethics: barriers, enablers and next steps. AI & Society. 38: 411-423.
- Wan T, Chen Z (2024) Exploring generative AI assisted feedback writing for students’ written responses to a physics conceptual question with prompt engineering and few-shot learning. Physical Review Physics Education Research.
- Rogiers P, De Stobbeleir K, Viaene S (2021) Stretch yourself: Benefits and burdens of job crafting that goes beyond the job. Academy of Management Discoveries 7(3): 367-380.
- Roberts K (2024) Structured Prompts: TIPS from the Mastering GenAI Workshop with Josh Cavalier. Science2Practice 2(1): 33-39.
- Sindermann C, Yang H, Elhai DJ, Yang S, Quan L, et al. (2022) Acceptance and Fear of Artificial Intelligence: associations with personality in a German and a Chinese sample. Discover Psychology.
- Cvetkovic I, Bittner EA (2022) Task Delegability to AI: Evaluation of a Framework in a Knowledge Work Context. HICSS.
- Wang X, Lin X and Shao B (2023) Artificial intelligence changes the way we work: A close look at innovating with chatbots. Journal of the Association for Information Science and Technology 74(3): 339-353.
- Patkar N, Fedosov A and Kropp M (2024) Challenges and Opportunities for Prompt Management: Empirical Investigation of Text-based GenAI Users. Mensch und Computer 2024 - Workshopband.
- de Sio FS (2024) Artificial Intelligence and the Future of Work: Mapping the Ethical Issues. The Journal of Ethics 28: 407-427.
- Langer M, et al. (2023) Trust in Artificial Intelligence: Comparing Trust Processes Between Human and Automated Trustees in Light of Unfair Bias. Journal of Business and Psychology 38: 493-508.
- Raj DJ AS (2024) GenAI and the Future of Education and Research, p. 11.