CHAPTER TWO

Different from Discipline to Discipline

Diversity in the Scholarly Publication System

Konstanze Rosenbaum

In academia, publishing is of the utmost importance, and this in at least three ways. First, the publication is vital for the communication of new knowledge. Research results have to be published in order to be considered scientific (or scholarly) knowledge (Weingart 2003: 32). Second, the formal publication is a central part of the reward system of science, and serves as the foundation for attributing reputation. Third, mechanisms of external assessment of performance are also largely based on publications insofar as performance is measured by counting publications and citations. In the way science functions, publishing is an essential ingredient – in all disciplines. At the same time, however, there are significant differences between the various disciplines with regard to their cultures of publishing.

In the formal scientific communication system, homogeneity exists only in an abstract manner and refers to the functions of registration, certification, dissemination and archiving of new research.1 The present case study reconstructs the central differences in the publication systems of seven disciplines on the basis of expert interviews.

The analysis is structured along four comparative dimensions. The first dimension concerns the relationship between printed and digital publications in the individual disciplines, and the factors that shape the respective states of digitisation in the scholarly publication system (section 2). In the course of digitisation, not only do the media of publication change, but so does the accessibility of publications. The realisation of free accessibility and the extensive usability of publications are the most important developments within the system. The differences are discussed in detail with respect to one particular model – free access at the original place of publication (gold open access) (section 3). Differences are found at the economic level and with regard to reputation. Subsequently, processes of self-monitoring of quality and of quantitative measurement of scientific performance are analysed. In section 4, the peer-review processes of the different disciplines are compared with regard to their function of selecting contributions before publication. After that, the focus is on the significance and perception of bibliometric measurement of performance (section 5). In a first step, the influence of bibliometric measures on the publication behaviour of researchers is presented using the example of the journal impact factor. Here, complementary to the mechanisms of the peer-review process, the selective function of impact factors is worked out with respect to publication activity, on the one hand, and distributive decisions, on the other. The analysis is preceded by a brief description of the empirical material and the methods of evaluation.

1 Materials and method

The main focus of this contribution is on the perspectives of scientists towards the communication system in their respective disciplines. In the framework of eight interviews, the members of the interdisciplinary working group ‘Future of the Scholarly Publication System’ and an invited contributor gathered information on the characteristics and practices of the communication system in each discipline. The natural and engineering sciences are represented by experts from mathematics, physics and medical engineering. In the humanities and social sciences, two historians of science, one art historian, one sociologist and one legal scholar were interviewed.

The interviews were conducted on the basis of a loosely structured guideline intended to reveal structural aspects of the formal communication system, on the one hand, and of the publication system and its responsible organisations, on the other. Moreover, procedures of professional evaluation, performance measurement and access to scientific information were taken into account. The open design of the interviews was chosen to give the experts the opportunity to set different priorities and to explain the different facets of the scholarly communication system on the basis of their respective practical experiences (cf. Bogner et al. 2014: 12–15). Correspondingly, the evaluation aimed at reconstructing the internal perspectives of the different disciplines on the communication system and at elaborating the specific disciplinary differences by means of comparative dimensions. It is not claimed, however, that the results are complete or that they can be generalised.

All interviews were transcribed, and all excerpts presented here were translated into English. The resulting body of text formed the data material of the analysis. Computer-assisted qualitative content analysis was chosen as the method of evaluation. The system of categories was developed both deductively and inductively: analytical dimensions and main categories were derived from the interview guideline, and through comparison with the empirical material, further main categories were added and sub-categories differentiated. Methodologically, techniques of thematic as well as summarising coding were applied (cf. Kuckartz 2007: 83–96; Schreier 2012: 58–106).

2 The relationship between printed and digital publications

An initial and important comparative dimension is the relationship between printed and digital publication. As a result of the development and utilisation of digital information and communication technology, the scholarly communication system is subject to far-reaching dynamics of change. Mailing lists, email traffic and scientific Internet forums structure the social organisation of the exchange of information between scientists, and are used in the scientific communities to different degrees (cf. DFG 2005; Fry & Talja 2007). Along with the spread of digital infrastructures, the digital publication has established itself as a format, albeit to a very different degree. As a comparatively young form of publishing, digital formats are established and used inconsistently across the scholarly publication system. This heterogeneity of digitisation within the communication system becomes clear in the interviews. Digital publication has a high status in disciplines that are characterised by a strong international orientation or by high technological standards of graphical presentation. In the natural and engineering sciences as well as in the history of art, scientists make stronger use of the technological opportunities of digitisation in order to design or disseminate their publications. In the humanities and social sciences, digital publications play a less important role. The interviews provide indications of why digital publications are of varying importance in the different disciplines.

One indication of an interdependency between the importance of electronic publications and the typical medium of publication comes from the history of science. There, printed monographs and anthologies are of central importance. These ‘books of the normal scholarly production’ (H.-J. Rheinberger) are still primarily received in paper format, and e-books are uncommon. Review journals such as sehepunkte and the Berlin mailing list H-Soz-Kult are, however, published in digital form. These are important places of publication within the discipline, which are freely accessible in purely electronic form.2 In the journal sector, an electronic version appears in addition to the printed one. These publications are disseminated by the publishers as a print version and electronically via publication servers.

The state of digitisation in the publication system depends not only on the media of publication but also on the performance capabilities of the responsible organisations, in particular the publishing companies. In German-speaking sociology, the publisher Springer VS is a powerhouse. As a large publishing company in a central position within an otherwise fragmented landscape of publishers, Springer Verlag was easily able to take over platforms from science, technology and medicine (STM) in order to provide digital products within sociology as well. Smaller publishers frequently lack the resources to fulfil even the minimum expectations of their readership in terms of digital publications. Such developments have favoured the emergence of oligopolistic structures within the landscape of publishing companies.

Aside from publishing companies’ technological capacity for innovation, the attitude of the respective discipline towards digital publication also plays a role. In the field of law, the landscape of publishers is characterised by decentralisation; here, Beck Verlag is the market leader. In contrast to mid-sized publishers such as Mohr Siebeck and De Gruyter, Beck’s position allows the company to distribute all of its digital products for a fee. E-books, however, have only been added to the portfolio of Beck-Online in recent years (cf. also Roxin 2009: 64). This hesitation in adding e-books correlates with the negative attitude of the scientific community towards digitisation as such.

In the history of art, there is a complementary relationship between printed and digital forms of publication. In this discipline, digitisation programmes and purely electronic publications were already developed and conceptualised at the beginning of the 1980s.3 At the same time, the form of the printed book remains indispensable for monographs or exhibition catalogues. The latter is a form of publication that not only addresses a broad public but also serves the exchange of research results within the discipline.4 Art history is a pictorial discipline (cf. also Boehm 2009: 62), whose publications are characterised by a special bond between image and text. In most scientific disciplines, images additionally serve to illustrate connections between arguments that stem from theoretical or empirical work (for example, from texts or from work in the laboratory). The history of art reverses this conventional relationship between image and text: ‘images come first, the texts need to try to illustrate them’ (H. Bredekamp). Aside from the quality of the image, factors such as the choice of paper and the density and complexity of the digital samples influence the outcome of a publication. In the printing process, authors are therefore strongly dependent on the printing and layout quality of the publisher and the competency of its designers. ‘It’s about providing the images with text without the reader having to turn the page [...]. If you have to turn back pages in a description, the description is gradually lost’ (H. Bredekamp). Epistemic reasons and the resulting high technological standards regarding presentation thus explain a ‘unique standard in the art of book printing, which nowadays can be achieved through high-performance digital processes, but which cannot be well shown in digital form. Analogue high-performance books are produced by digital means’ (H. Bredekamp).

Digital publications can supplement or even replace printed formats. The latter is the case in the natural and engineering sciences. Here, the electronic journal article is the typical and predominant medium of publication and has almost entirely replaced printed journals (cf. DFG 2005: 22–25). From the perspective of the researchers interviewed, digital publications offer significant advantages regarding the reception, dissemination and archiving of research, regardless of spatial boundaries.

The interviewee from medical engineering, for example, emphasised the efficiency of access to digital publications. Here, digital journals play an essential role. The university libraries acquire their licences in the form of bundle deals and provide researchers access via the internal university network:

It’s paradise and we sit at our desk... and we read a publication and there’s something in the reference list and I click on that and it’s there. [...] So that’s of course nice because it is important for research. It’s very important that someone does not first have to be sent somewhere... – and you wait three days until you have it. If you have it right away, then that’s fantastic for scientific work. (O. Dössel)

In mathematics, access to articles is mainly electronic; printed journals are rarely used. Only the journals of scientific societies provide their members – in some cases also the editors and the editorial board – with printed copies.

Physicists prefer a communication culture of direct and informal exchange. Preliminary information can be passed on verbally at conferences or symposia. Their daily work is almost exclusively organised in working groups through which even young researchers are integrated into the community (cf. also Haug 2009: 97–98). Over the last 20 years or so, this face-to-face communication has been supplemented by email traffic and the electronic dissemination of preprints among peers. Against this background, digitisation in physics has created an additional channel for circulating information, one that complements both informal verbal exchange and formal publication. In particular, the ease of saving contributions in PDF format and archiving them on so-called ‘preprint servers’ accelerates the pace at which research results can be disseminated (cf. also Fry & Talja 2007: 127). Moreover, manuscripts are nowadays much easier to produce and revise, while at the same time the quality of colour figures has increased.

The results from the interviews illustrate that the state and character of the digitisation of the communication system are influenced by different factors. Epistemic factors play a role – such as the standards regarding graphical depiction and the relationship between image and text described for the history of art, or the pace of scientific progress and the degree of competition over priority in physics – as does the kind of media used for publication. The latter is clearly visible in the humanities, especially in the still high status of the monograph. Another influential factor is the responsible organisations of the publication system, which, as has been reported for sociology, can limit the extent of digitisation in a discipline due to their restricted opportunities for technological innovation. Finally, there are also internal scientific factors at the level of normative attributions: a high degree of acceptance of digitisation, as in mathematics, or aversion to it, as in law, also accounts for the differing extent to which digital publications are used.

3 Open access in the scholarly publication system

As a consequence of digitisation, there are new pathways of access to scientific publications, which, under the key term ‘open access’, have changed the traditional system of libraries, publishing companies and fee-based dissemination of printed works (cf. Andermann & Degkwitz 2004). Open access (OA) first of all means free accessibility of electronic scientific publications, but can also refer to primary data and metadata, source texts or digital reproductions of images.5

Two factors are essential for open access. The practical implementation requires adequate Internet infrastructures in order to create universal opportunities for access on the side of the recipients. Furthermore, scholarly publications are subject to copyright, which assigns a contribution to its author. In order to realise open access, the recipient needs to be granted extensive usage rights (cf. Andermann & Degkwitz 2004: 6–10).6

Open access can be realised in two fundamental ways: green OA refers to the creation of free accessibility to publications by depositing a version of the text in a repository or on a home page when it has already appeared in a different location with restricted access (cf. Lossau 2008).7

Gold OA, on the other hand, provides free accessibility at the original place of publication.8 The following description is limited to the ambitious model of gold OA, which aims to remove the financial entry barriers to scientific knowledge that characterise the traditional publishing world. In the interviews, gold OA is critically discussed from the perspective of researchers in their role as authors. Here, the focus is on financing models and on questions of reputation.

3.1 The economics of gold open access and the financing of digital publications

Due to free accessibility, the income that publishing companies usually derive from the sale of printed works or from subscriptions disappears. As an alternative to subscription fees, the costs of publication are therefore transferred to the author or his or her institution through article processing charges (APCs). In some disciplines, APCs are already institutionalised regardless of open access. Thus, in medical engineering or physics, publication fees for journal articles are the rule rather than the exception, regardless of whether an article is destined for an OA journal or not. In the humanities and social sciences, publication fees for journals are not common but frequently occur for monographs or anthologies, where they are usually referred to as additional printing costs.9 The financing of gold OA through APCs is viewed differently among the interviewees, ranging from pragmatic attitudes (medical engineering, physics) to rejection (mathematics) and even outrage (history of science).10

Authors of OA publications in the humanities face financial hurdles. According to the objectives of science policy, publication fees should be paid by research institutions and third-party funders following a grant application by the researchers. For this purpose, German science organisations are creating funds (cf. Eppelin et al. 2012). According to one interviewee, the budgets of projects in the history of science are often too small to cover the publication fees of gold OA. At the same time, third-party funders such as the DFG and research institutions such as the Max Planck Society increasingly demand that the projects they finance be published in OA journals. These financial difficulties of the golden model are aggravated by the size of the publication fees. From the perspective of the interviewees, the charges demanded by the publishing companies are unjustified, and are interpreted as a strategy to have the system of science indirectly subsidise the transition to open access. Many researchers in the humanities are not able to pay this money and thus have not become active in gold OA. Instead, they try to ‘publish their work the way they can afford it, and that is in the best case green OA, text only in any case’ (M. Ash).

Moreover, there is scepticism regarding the value-added chain of scientific information that is organised via the successive efforts of authors, publishers and libraries (cf. Andermann & Degkwitz 2004: 7–10). While open access changes the function of publishers (dissemination) and libraries (archiving), the author as intellectual source of scientific information remains equally indispensable. In fields where publication fees are uncommon, such a model creates tension on the side of the authors. A historian of science states:

Whereas you ask yourself what this has to do with open access, that you should buy your own product,... that you should pay Springer [€] 2 000 for publication in a Springer-owned journal to put the piece online, yes, it ends there for me, I just don’t publish in these journals anymore. (H.-J. Rheinberger)

In physics and medical engineering, publication fees for open access are more often accepted. In these fields, APCs for journal articles are part of the normal conditions of scientific publishing. The collective organisation of the work processes enables physicists to divide the costs for publication: ‘Since we mostly work and publish in groups, there is always some group or someone who can raise the publication fees’ (S. Großmann). According to the interviewee, medical engineers prefer a pragmatic use of publication fees and orient their number of submissions to the third-party funds available per year. ‘I use the money I have. If I still have money in the DFG project that was designated for this, then I use it, I don’t want to let it expire’ (O. Dössel).

The interviews with representatives from medical engineering and mathematics illustrate that the acceptance or rejection of financing models on the side of the authors also results from the normative structures of the disciplinary communities. The interviewee from medical engineering described a close bond between his field and industrial practice, in which the professional careers of almost half of the professors began. In principle, they show a positive attitude towards entrepreneurial profit orientation and adopt the generation of economic profit as a guideline for their scientific activity. While the specific code ‘truth’ guides scientific communication, in medical engineering, the organisation and continuity of research are also oriented towards the economic distinction between payment and non-payment (cf. Luhmann 1984: 312–314):

Many colleagues of mine also view their store as one where profits are made, which in turn are of course invested in research. But that you don’t sell a research service at real costs, we don’t play such a zero-sum game, there’s no sense for us, then we only push money around in circles. (O. Dössel)

The objective of generating ‘profit’ thus serves to stabilise solvency for future research.

From such an entrepreneurial perspective, shifting the financing to the authors also seems rational. Summarising the interviewee’s attitude, one could say that professional services have their price, and service providers want to recoup their production costs plus a profit; in a market economy, that is simply a matter of fact.11 ‘Open access is not somehow cheaper because it is not printed’ (O. Dössel). The important difference between closed and open access lies in the money flow through which the services of the publishers are financed. The established subscription models of the libraries are supplemented by the opportunity for authors to buy themselves into OA journals.

The mathematical community is diametrically opposed to any commercial capitalisation of its knowledge. In its self-description, mathematics is characterised by a strong cognitive and normative consistency, which is also characteristic of communication in the field (see also Gritzmann 2009). As the normative foundation of the publication system, the interviewee pointed to two principles: ‘No author pays, no reader pays’ (M. Grötschel). There is consensus among mathematicians that scholarly knowledge is a public good that should be accessible to everybody at any time. The objective is to make the entire publication system open access without demanding publication fees from authors. For this purpose, mathematicians have been working on the ambitious project of developing a mathematical world library, the World Digital Mathematics Library, ‘which contains all mathematical articles of all time, electronically, classified, retrievable and searchable’ (M. Grötschel). The technological realisation is not so much a problem – ‘10 terabytes or so would be enough to store mathematics of all times’ (M. Grötschel). The difficulty lies rather in copyright issues and licensing conditions as well as funding. The aim is a far-reaching transformation towards a publication system that is open access and free of charge. ‘I want a publication system where someone who does research and has finished a paper can submit it without having to pay for it, and where I can read it without having to pay for that’ (M. Grötschel).

This basic attitude does not only concern OA publications but refers to all forms of publication costs on the side of the authors. ‘Mathematicians do not want publication fees... and do not try to publish where these are required’ (M. Grötschel). This is based on the view that there is enough money in the entire publication system that needs to be redistributed so that readers and authors do not face costs. Such a model of true open access has, however, not yet been achieved.

A specific financial problem that affects the establishment of open access concerns the history of art with its primacy of visual culture. Art historians face specific challenges regarding the copyright and adequate citation of their visual sources. Obtaining copyrights for images is difficult, costs a lot of money and has become even more difficult in recent years. ‘It has been shown that digitisation of photographs has not promoted free accessibility, but created obstacles, so that you now need student assistants to have images added to books or articles’ (H. Bredekamp). In the United States, ‘exorbitant prices’ (H. Bredekamp) have to be paid for the reproduction of images in discursive contexts. Should these be introduced in Europe as a result of globalisation or new economic trade agreements, ‘you can pack up or you will need huge resources such as the Mellon Foundation that is tapped by everybody’ (H. Bredekamp). In view of the high expectations within the discipline regarding advantages of digitisation and open access, the results are disappointing so far. The fear is that it will get even worse.

In most interviews, the practical realisation of gold open access was noted in connection with APCs and other publication costs. Aspects relating to reputation in OA publishing and reception are elaborated and described in the next section.

3.2 The reputation of open access media

Electronic media increase the reach of scientific information and offer authors the opportunity to increase their visibility and thus gain reputation. It can therefore be assumed that contributions in OA media accelerate and simplify the individual development of reputation. Reputation is not only nourished by citation advantages, however, but also by the reputation of the place of publication. This is precisely where OA journals are lagging behind (cf. Taubert 2010: 217). The mathematician interviewed traces this situation to a lowering of quality standards, which is partly the result of new and dubious publishers. These create gold OA journals, which they operate with minimal effort and without a sound review system. The publishers make profits via the APCs, which are mostly paid by the authors themselves, and publish articles that would not be accepted by journals with serious review systems. These so-called ‘predatory journals’ thus discredit the model of OA publishing (cf. the contribution by Peter Weingart on predatory journals in this volume).

The interviewees from sociology and physics pointed to different limitations on gaining reputation through OA publications. In sociology – a small discipline with a strongly fragmented landscape of specialised journals – OA journals are generally uncommon and even less viewed as places where one’s reputation could be improved, so that ‘as an author, you still shy away from it’ (U. Schimank). A different situation is described for physics. Here, a number of OA journals have been established – such as the New Journal of Physics – which also have a reputation-promoting effect. The most important journals, such as Physical Review and Physical Review Letters, are, however, still financed via subscriptions and thus licensed at a cost to readers. In order to counteract cost-induced barriers, peers within the community practise an informal kind of enabling access: ‘We so to speak make a living by providing this thing for free, and when it is published and somebody needs it or asks about it, then he will receive the respective file from us’ (S. Großmann).

In law, the acceptance and utilisation of open access differ. Open access plays practically no role, and digital infrastructures such as repositories or recognised OA publishers are not well established.12 Structural specificities of the discipline’s publication practice influence the dissemination of open access. The publication system is essentially financed through judicial practice, not through universities. The publishers respond to this with various subscription packages that are adapted to the needs of practitioners but entail high costs.13 The interviewee reported that access to relevant data and literature is not considered problematic. This is also due to the monopoly of Beck Online (in Germany) and the law portal Juris. ‘As long as you have access to these two large data banks and maybe three or four special journals that you need, you are basically satisfied’ (A. Peukert). For the most important medium of publication, the judicial commentary, authors are paid. These media have relatively high print runs since they address colleagues and students as well as practitioners, that is, law firms and courts. Royalties run to four or five digits and provide a significant income. These remain with the individual authors and are not returned to the system of science. Scholars of law thus generate not only intellectual but also monetary capital through their publication practices, capital that would be lacking in OA publications (cf. also Taubert & Schön 2014: 79). OA publishing is also considered an irrational strategy for another reason: OA media are not cited and are ignored by peers, since ‘if texts are simply on the net, they are treated as being non-existent’ (A. Peukert).

Overall, it can be noted that, from the perspective of researchers, the openness towards OA initiatives differs in the disciplines studied. In physics and mathematics, open access has the most important role. Existing obstacles are dealt with by the peers in different ways. In physics and medical engineering, a pragmatic dealing with current conditions of gold OA publishing prevails. Mathematicians, on the other hand, point to a need for reform with respect to free accessibility and the establishment of reliable peer-review procedures.

The interview partners from the humanities had more reservations towards gold OA, and justified these with reference to the publication fees. In medical engineering and history of science, there are indications that the funding for publication provided by the responsible organisations differs according to the fields and thus influences publication practice. Art historians, too, fear financial constraints due to changing copyright regulations in the implementation of open access. Scholars of law have a different perspective: they do not consider publication fees problematic but rather the potential loss of royalties. The judicial field therefore proves to be an obstacle to the shift towards open access.

The interviewees from law and sociology mentioned reservations towards open access with respect to reputational aspects. The disciplinary culture of law is characterised by an aversion towards processes of change initiated by digitisation. In sociology, with few exceptions, the performance capacity of publishing companies in digital infrastructures is low. In both disciplines, OA publications are considered harmful to one’s own reputation.

4 Peer review

In the communication system, peer review is a key mechanism of steering science. In the review process, contributions or research projects are – so the ideal assumption goes – subject to an independent evaluation that attests to the worthiness of publication of a manuscript or the novelty of a planned research project.14 Peer review thus serves the selection of truth claims and the construction of progress of knowledge. Competent scientific experts decide about research proposals and contributions and therefore also about the allocation of chances for individual scientists, working groups or research institutions to obtain reputation and financial resources (cf. Luhmann 1974: 236–238; Neidhardt 2010: 281–282; Weingart 2005: 284–292).

The review process precedes the publication of a text or the approval of a project proposal. In this context, the extent of implementation and standardisation of mechanisms of evaluation differ between the disciplines. This finding was documented during the interviews and will be presented in the following sections for the natural and engineering sciences, on the one hand, and the humanities and social sciences, on the other.

4.1 Peer review in the natural and engineering sciences

In the natural and engineering sciences, institutionalised peer review is very common and applies especially to journals (cf. DFG 2005: 23–25). In medical engineering, parts of the conference proceedings are also subject to rigorous review, particularly in areas that are pertinent to information technology. In the interview from mathematics, the assessment of the reliability and capacity of the peer-review system is more positive than among the physicists or medical engineers. For mathematics, two normative conditions that support the formal scientific communication system and influence the submission and accessibility of contributions at the level of the publication system have been mentioned.15 The normative structure can be completed by two additional principles: ‘High-quality archiving, high-quality refereeing’ (M. Grötschel). With respect to the further reception of research results, the latter is of fundamental importance. The reading of mathematical articles is arduous and time-consuming, even for peers. Selecting contributions in terms of their worthiness before publication therefore saves potential recipients a significant amount of time.

Therefore, specialised review is of high importance to us, so that we are only confronted with articles that are of high quality and where a competent colleague has already evaluated that the content is okay and that you can rely on that. (M. Grötschel)

Rejection rates are accordingly high, at 50–80%. The duration of the review process is also not to be underestimated: sometimes two years pass between submission and publication.

The high quality of review does not guarantee absolute trustworthiness, but it does provide widely acknowledged trust in the scientific quality of mathematical contributions. However, it also sets high standards regarding the competence of the reviewers and the willingness of peers to participate voluntarily in the complex review process.16 To ensure motivation and quality of evaluation, the editors of a journal often compile a team of reviewers comprising an experienced colleague and a doctoral student who currently works in the respective field, ‘so that you have two perspectives, since the doctoral student is interested in reading the article because he might gain something from the content, and the one who has an overview can assess the contribution’ (M. Grötschel).

Other problems regarding the implementation of peer review and the maintenance of its reliability are mentioned for physics and medical engineering. The massive increase in submissions, the trend of dividing research results into ‘least publishable units’ and the shorter half-life of research claims lead to excessive demands on reviewers.17 First, it is increasingly difficult to find reviewers at all, and second, they cannot always cope with the number of submitted contributions. In terms of time, the resources are not sufficient to read all contributions in detail, to test experimental claims or even to identify fraud. Administrative demands on young researchers – for example, the requirement that doctoral students have a certain number of peer-reviewed publications – add to the overburdening of the review system.18 To ease the workload for editors as well as reviewers, new technologies – such as plagiarism software and online editorial management systems – are increasingly used. Digital networking and the establishment of databases make it easier for editors to search for appropriate and willing reviewers.

The interviewees also criticised the lack of incentives to review. A potentially negative cost-benefit calculation decreases the willingness to review and contributes to the fact that the ‘review system is not able to fulfil what we expect of it’ (S. Großmann). The operational capability of the system rests on the willingness of scientists to devote part of their working resources to examining the intellectual work of others. This occurs against the background of a structural constraint, which one engineer described as follows:

We have a dramatically increasing number of submissions... but the high-quality contributions only increase marginally. And that is understandable since science and the quality of scientific research institutions in the world do not increase exponentially but slowly and linearly. (O. Dössel)

The reviewers now not only write reviews about the few notable contributions but have to report on all submissions, even ‘if they are no good’ (S. Großmann). In contrast to the practice in mathematics of creating incentives to review through disciplinary interest, as described above, in physics and medical engineering there appears to be a correlation between an increasing volume of communication and increasing opportunity costs for the reviewers. As a result, reviewers are not selected because of their competence but because of their willingness. This also leads to the fact that journals

continue to go lower with the qualification of the reviewers, which is then a vicious cycle, since, if the reviewer is clueless and... thinks it is all fine, then, of course, a lot of articles can be published that just have no relevance for science. (O. Dössel)

If the scientific quality of publications goes down, the value of the peer-review system will eventually be put into question.

In both disciplines, the peer-review system faces a discrepancy between its claim to quality assurance and its practical capacity. To keep the state of knowledge up to date, there are forms of communication beyond the formal publication structures in both fields. In medical engineering, the conference as a place of interaction is gaining importance for the exchange of information: ‘It is considered to be faster since the classic publication system is a bit slower... with the review process’ (O. Dössel). In physics, aside from verbal exchange in the framework of conferences, private communications and the informal dissemination of preprint texts among peers are also common.19 The latter are usually sent around within the respective communities, and critical feedback is received from peers before the reviewed version appears in print. Research results are thus disseminated within the community before the work has been formally registered. Still, conflicts over priority are rare. The peers belong to closed communities and know about each other and who is working on which projects. Expectations regarding the honesty of colleagues are secured via informal sanctioning mechanisms, which prove to be a functional equivalent of formalised peer review. ‘If a fundamental new finding is indeed discovered, then all involved know where it occurred... and if then someone says it differently then there is an ostracism in the community, so it is corrected’ (S. Großmann).20

4.2 Peer review in the humanities and social sciences

In contrast to the natural and engineering sciences, empirical examples from the history of science and art, sociology and law show a broader variety of publication media overall, and peer review covers a smaller range.

History of science is characterised by both interview partners as a classical humanities discipline with a small community compared to the natural sciences. A linguistic dichotomy is a structural characteristic of its communication system: English and German (or the respective national language) are used in parallel and cover different spaces of publication and reception. This is also reflected in the relevance of the different publication media. In German-language history of science, the monograph is predominant, ‘which you are responsible for and which you have written yourself’ (H.-J. Rheinberger).21 The evaluation of monographs depends on the place of publication. Especially for theses – relevant for access to the system of science – this is determined by economic and time-related factors.

The evaluation of contributions in the history of science is time-consuming and can take up to two years, even for submissions to highly respected journals, especially ‘if all editors read all texts, that takes time, as they are absolute experts and very busy’ (M. Ash). Rejections are rare; usually there is a request for revision before printing. The specialised publication market is strongly fragmented and offers different price segments and speeds of publication. This kind of publishing landscape guarantees authors the publication of their work – provided that the payment of publication fees is secured. The selection of manuscripts is done by the funders and publishers. Some publishers are known to decide about the worthiness of publication not according to content but according to economic factors:

If you come with funding, you’ll be printed. Lit is a bit higher quality than Lang,[22] but if you want to publish fast, you know where to go and that actually presents a small dilemma. Younger researchers who are impatient go there because they want to publish and they are warned by us elders that this may not be the right thing for their reputation, but they don’t listen. (M. Ash)

In the United States, peer review is a prerequisite of quality assurance and a standard used by publishers of journal articles and monographs. Meanwhile, standardised peer-review processes also exist in German journals. Both interview partners reported low rejection rates in the evaluation of journal articles in this context. Rejections are mainly due not to a lack of quality but to the topic of the contribution, which does not always fit the scope of the journal. This situation can be traced to differences in the respective publishing systems:

What we have here in the German-speaking sphere... is what I would like to call a printing house mentality, i.e. the publishing companies are printing factories. As they were in the 16th century, they still are today with the help of state funds. Quality assurance does not play a role in such a situation, or at best a small role. Now it has to play a role because everybody talks about peer review. Thus, the publishers have begun to institutionalise this, but it would have never happened by itself, while in the USA the leading university presses and also the small university presses have had peer review for decades. (M. Ash)

According to its self-description, German legal studies are characterised by strong internal controversy due to conflicting legal interpretations23 and by the duality of academic science and judicial practice. Quality control in this discipline is done by a small number of people. The evaluation of contributions is hardly standardised and is conducted partly by judicial practitioners and partly by scholars. The evaluation is frequently done by ‘an editor, who is often a lawyer. Then it is often special journals where the lawyer has a relatively large amount of expertise, he then makes a pre-selection, and then it goes back to the editors who make a decision’ (A. Peukert). The compilation of contributions into conference volumes is done by the speakers, who participate after being invited by the organisers of the conference. In legal studies, personal networks are more important for developing reputation than the formal submission of contributions in response to calls for papers.

Structural similarities can be found in German-speaking sociology. In addition to the model of the deciding editor who, without assigning external reviewers, takes the role of gatekeeper, standardised peer-review processes are partly institutionalised in journals. However, only about a third of new contributions are published as journal articles, of which again only a third pass through a peer-review process. The typical place of publication in this discipline is the anthology, which is not subject to review before publication (cf. Volkmann et al. 2014: 203; Wissenschaftsrat 2008: 20–23).24 Demands for broad and standardised peer review are a reaction to the ‘flood of anthologies’ (U. Schimank) but have paradoxical effects:

The people first try it in journals. The journals have, however, not increased in volume or in numbers, and that means the pressure to publish more, of course more quality, leads to increased rejection rates and that you have to publish your rejected material somewhere else, and that’s the anthologies. This means, paradoxically, the pressure that was to move away from the anthologies, now moves into the anthologies. (U. Schimank)25

Sociology describes itself as a multi-paradigmatic discipline with a small community, which is fragmented into competing theoretical and methodological fields (cf. also Münch 2009). The affiliation with a specific sociological field influences the results of the peer review process and success in job interviews. This is especially true for sociological theory, ‘the most disrupted field in sociology’ (U. Schimank). In contrast to mathematics, basic paradigmatic controversies lead to a low cognitive integration of the discipline and can be destructive in the review process.

If you dare to submit such an article to a journal, then you can be sure that the two colleagues who should peer review it, belong to another camp and will tear it apart. Then you rather publish the things you consider original in anthologies where nobody gets in your business. (U. Schimank)

On the other hand, the discussion of knowledge claims may profit from scant peer-review coverage, above all if advancement of knowledge not only denotes accumulation of empirical findings but also includes innovative contributions that open up new pathways (cf. Weingart 2003: 25–26). Standardised review processes refer to pre-defined criteria (cf. DFG 2013), and are thus based on the existing state of knowledge of a discipline. As a result, peer review in sociology creates mainstreaming effects while media without formal evaluation provide ‘free space for unorthodox things’ (U. Schimank). In view of the diversity of paradigms, anthologies as media of publication show a functionality which ‘refers to the process of gaining knowledge in these fought-about fields even though it is clear that you cannot differentiate between original idea and nonsense any longer. The reader has to do that on his own then’ (U. Schimank).

The interviewee from the history of art also criticised standardisation and a lack of clarity as consequences of standardised peer review. At the time of the interview, his discipline operated in five languages. At the same time, the scientific community had a functioning global association whose communicative exchange made a broad peer-review process seem not only unnecessary but also an ‘artificial, strange form of evaluation’ (H. Bredekamp). The community is unwilling to subject its publications to a standardised evaluation: ‘[p]eer reviewing is against quality if you take quality to be methodological avant-garde’ (H. Bredekamp). Here, a specific normative expectation regarding progress in knowledge is expressed in the history of art, one that puts originality and deviation from the scientific mainstream in the forefront.

In mathematics, physics, and medical engineering, a relatively strong degree of cognitive homogeneity can be assumed due to the inherent structure of natural science knowledge (cf. Gritzmann 2009; Weingart 2003: 25–26). The example of mathematics additionally shows normative consensus in the scientific community. The self-steering function of peer review is of high importance in this discipline in order to select research contributions according to the criterion of scientific quality before publication. Problems emerge as a result of the high standards of quality that potential reviewers need to fulfil as well as the time required for the evaluations. In medical engineering and physics, the difficulties are in maintaining the reviewer system, especially in the dimension of time. Scientists respond to the high pace of new knowledge and the competition for priority with a high frequency of publications of journal articles, which overwhelms the resources available for evaluation.

In contrast to the natural and engineering sciences, the humanities and social sciences show a stronger heterogeneity, which corresponds to a comparatively low degree of institutionalisation of evaluation processes. The example of legal studies reveals an influential factor in the dual structure of the communication community. Within the two contexts of academic science and judicial practice, the processes of quality assurance are organised differently. Both have the low prominence and normative significance of peer-review processes in common, and thus a low number of potentially available reviewers. In history of science and sociology, the extent of institutionalised peer review depends on the medium of publication. Another aspect can be found in the evaluation system of history of science and history of art at the level of responsible organisations. Aside from the regional, financial and disciplinary variations in the publishers’ services, the relevance these organisations attribute to quality control is essential for the institutionalisation of peer review.

The interview partner from mathematics welcomed the selective function of the peer-review system, as the evaluation according to clearly defined criteria of quality ensures that irrelevant contributions do not appear in the formal publication system in the first place. The interviewees from sociology and history of art provided epistemic reasons against such a pre-selection of contributions. On the one hand, predefined criteria for evaluation do not differentiate enough between diverging paradigms, while on the other hand, they limit the freedom of research.

5 Bibliometric measuring

While contributions are evaluated by peer review with respect to qualitative criteria, bibliometric indicators formalise the process of receptive attention and depict effects of selection of scientific communication (cf. Marx 2009: 132–133). Citation analysis and index numbers can be used to measure scientific productivity and performance. Two areas where performance indicators are applied were discussed controversially in the interviews: the orientation function of impact factors and their application in the framework of processes of allocation.

In the next section, the selective function of impact factors is evaluated from the perspective of scientists as producers of knowledge. Two points of reference emerged as worth focusing on: the interview partners discussed performance indicators in general and the journal impact factor in particular in the contexts of individual reputation and quality of content. The discussion here again takes the disciplines as ordering criteria in order to compare their heterogeneous positions. Unintended structural consequences at the level of the publication system are elaborated in this context as well.

5.1 The formalisation of reputation through the journal impact factor

Reputation can be attributed not only to individual scientists, working groups and scientific organisations, such as research institutions, but also to publishing companies and journals. A contribution in a renowned medium can then be considered an indicator of individual reputation: highly reputed places of publication indicate the scientific recognition of those who have access to these places (cf. Luhmann 1974: 237–238 & 1992: 245–251; Weingart 2003: 22–35). In the publication system, the journal impact factor (JIF) is a standardised, quantitative measurement tool which formally depicts the impact of journals on the basis of citation analysis.26 The relevance of the journal article within the respective publication culture is essential for the JIF’s degree of institutionalisation. In the humanities and social sciences, impact factors are provided mainly by international journals and are otherwise weakly institutionalised (cf. International Mathematical Union 2008: 8; Nederhof 2006). Here, performance indicators are mainly used in the framework of employment interviews.
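For orientation, the standard two-year definition of the JIF can be sketched as follows; this is the widely used Thomson Reuters (now Clarivate) formula and is not drawn from the interviews themselves:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y\!-\!1 \text{ and } Y\!-\!2}{\text{number of citable items published in years } Y\!-\!1 \text{ and } Y\!-\!2}
\]

A journal whose articles from the two preceding years are cited, on average, between once and twice in the current year thus has a JIF between 1 and 2 – the range mentioned below for renowned journals in medical engineering.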

In the natural sciences and technological disciplines, the JIF is well established at the level of the publication system and bibliometrically depicts the hierarchy of publication media (cf. Marx 2009: 134).27 However, the adequacy of impact factors is viewed differently in these disciplines. The interviewee from medical engineering viewed the JIF as significant for the strategic choice of the place of publication. In this discipline, the JIF of renowned journals lies between 1 and a maximum of 2, and ‘that’s the ambition of my doctoral students that they want to get in there’ (O. Dössel). The number of these top-ranked journals is small (approximately 10); most journals have a JIF of < 1, and ‘that’s where you go if it didn’t work out somewhere else’ (O. Dössel). In a positive as well as a negative way, impact factors serve as points of reference for the scientific quality of publication media. The consequence is that a journal with a JIF of 0.2 ‘is also not taken seriously among colleagues’ (O. Dössel).

The strong orientation function of impact factors also influences the development of digital infrastructures in the publication system. According to the interviewee, the potential gain in reputation is clearly connected to the establishment of electronic search engines. A research result only enters the citation cycle ‘if it was placed with some publisher’ (O. Dössel). Informal places of publication, such as homepages, which are not listed in established search engines and citation databases, are not used by peers. Bibliometric formalisation efforts, however, also influence gold open access. Here, there is a correlative connection between the implementation of impact factors and the fee structures of OA publishers: while the subscription fees of high-ranked journals, such as the journals of the IEEE (Institute of Electrical and Electronics Engineers), have decreased, publication fees in the OA field increase with the respective JIF. PLoS, for example, has gained a strong reputation: its thematically specialised journals have high impact factors, but they also demand high APCs.28 From the perspective of the interviewee, APCs prove to be a good investment in the OA field: ‘We do that more often now, the trend clearly being that open access journals also have an impact factor, are officially listed and measured at Thomson Reuters’ (O. Dössel).

In physics, the impact factor also indicates reputation. One particularity here is the discrepancy between informal circulation of pre-print versions and formally completed works. Current contributions are usually discussed and used within the community in parallel with submission, so that the peer-reviewed published versions lose their character of novelty. The peers thus face a fundamental question:

Why do we still publish... if we have already disseminated it some other way. And my conclusion is that it is published mainly due to prestigious reasons and because of the proposals to third party funders. This may be a harsh accusation but I think it is like that because we already know everything when it appears, so why does it have to appear? (S. Großmann)

Formally registered publications no longer have a central function for the continuation of scientific knowledge production, but they can be cited. In physics, impact factors provide incentives for formally certified publishing. The standardised measures make comparisons of production outputs easier in a discipline in which ‘prestige and counts in publication lists’ (S. Großmann) have far-reaching influence on career and research opportunities.

In mathematics, the impact factors of journals also correlate with the hierarchy of the media of publication. According to the interviewee, peers do not, however, orient their publishing behaviour towards the results of scientometrics and are sceptical of the mechanical use of statistical measures (cf. International Mathematical Union 2008). The criticism is thus not aimed at the capability of bibliometric measures as such but at publication-based indicators as proxies for scientific quality. Fundamental criticism is levelled at treating the database as objective: it is always distorted by the citation behaviour of researchers. ‘They measure something but what is really measured? And can you in fact prove that they measure that which you think is being measured?’ (M. Grötschel).29 Complimentary as well as negative and strategic citations all create attention. Consequently, increased citation rates are not per se a positive indication of scientific quality. Moreover, reward mechanisms such as prizes, which promise a nearly irreversible benefit for reputation (cf. Weingart & Winterhager 1984: 144), are not tied to impact points. The highest award in mathematics, the Fields Medal, has been awarded to persons whose citation numbers were lower by a factor of 100 than those of their competitors. In mathematics, people are therefore cautious about using publication-based indicators outside their contexts of calculation and application.

The interviewee from history of art also had epistemic doubts regarding the significance of performance indicators in general and the journal impact factor in particular. Bibliometric measurement procedures rest on a fundamental category error: quality cannot be measured quantitatively, so performance indicators in general do not permit positive conclusions about quality. In addition, citation indicators can trace a diffuse picture of the effectiveness and visibility of research contributions, but their validity is methodologically tenuous due to extraneous social influences. Thus, citation cartels and the informal obligation to cite gatekeepers have a distorting impact on the distribution of attention. Power cannot be entirely excluded from the communication of research results in science.

In addition, citation rates are influenced by the assumed respectability of the place of publication. Impact factors may stabilise such assumptions without necessarily connecting them to quality or the progress of knowledge. On the contrary, normative expectations about which publication media are worth citing can limit the freedom of scientific visions. Advancement of knowledge is promoted at a few places of publication on the Internet ‘which nobody cites, where the wildest, the freest theses are formulated. Everybody writes what they are not allowed to write when impact is involved and that’s where the show is’ (H. Bredekamp). In history of art, parallel infrastructures beyond the institutionalised criteria of evaluation emerge ‘which nobody is allowed to cite, but which can be more important than published arguments’ (H. Bredekamp).

The question whether ‘the informal is a sign of low quality’ (A. Peukert) is also at issue in legal studies, a discipline which, according to the interviewee, is structured controversially on the inside and self-sufficiently towards the outside. There is, however, consensus regarding the use of publication-based indicators: quantitative evaluation mechanisms cannot produce qualitative judgements ‘because you can’t measure that from the outside’ (A. Peukert). Within the community, the evaluation mechanisms and opportunities for participation are weakly formalised, without this impairing their function. In printed media, the hierarchies are well known and are documented, especially in the choice of the type of publication. Depending on whom one’s own contributions address, they can lead to prominence on the one hand or to reputation on the other (cf. Weingart 2003: 26–28). ‘The closer you go to the daily practice in law, the lower, I would say, is the scholarly reputation of performance, and that’s where journals are structured differently, which degree of abstraction they allow and wish for’ (A. Peukert). The allocation of attention is determined by the place of publication: ‘Everybody goes to Beck Online and if a paper is not in there, then it is effectively invisible’ (A. Peukert).30

In contrast to the internationally received journals in the natural and engineering sciences, the journals in the German-speaking humanities and social sciences are listed to a much smaller degree in the Thomson Reuters citation databases (cf. Hornbostel et al. 2009: 19–27). The interviewee from sociology illustrated this finding by means of citation rates in the Social Science Citation Index (SSCI). The two most important US journals, American Journal of Sociology and American Sociological Review, have about 5 000 citations per year. In contrast, the most important German journal, the Kölner Zeitschrift für Soziologie und Sozialpsychologie, only has about 250 citations per year. Such discrepancies point to the dubious validity of the SSCI in the social sciences. The citation index of Google Scholar fares somewhat better but does not provide reliable reference values due to a lack of transparency. ‘We do not know what Google Scholar measures and how they do that; they don’t tell us’ (U. Schimank).31

Another methodological reservation results from discipline-specific publication habits. In the history of science, some journals have an impact factor whose validity is already limited due to the comparatively low publication rates within the field. The peers are aware of the informally valid hierarchy of publishers and professional societies. For example, there is consensus among authors as well as editors that the journal ISIS is at the top of the renowned places of publication, ‘regardless of whether one associates it with an impact factor or not’ (H.-J. Rheinberger).32 Impact factors are therefore an addendum that provides neither the peers nor the responsible organisations with additional information. ‘Everybody knows who they are’ (M. Ash).

5.2 The practical relevance of performance indicators for allocative decisions

The evaluation of the interviews pointed to the different degrees of institutionalisation of impact factors in the individual disciplines. The interviewees differed in their opinions about the advantages and disadvantages for their respective fields. In all interviews there were, however, indications that, from the perspective of scientists, performance indicators represent a ‘measurement from outside’. Thus, these are external evaluations that are adapted and implemented to different degrees in the scientific communities. The different degrees of practical relevance of performance indicators can be shown by means of evaluations of proposals and employment interviews.

Performance indicators suggest a simple handling of distributive decisions, as they abstract from specialised knowledge and offer standardised evaluation criteria that run parallel to elaborate peer-review processes. Performance indicators thus increasingly serve as an instrument to make and legitimise allocative decisions (cf. Weingart & Winterhager 1984: 18–23). The interviewees from the humanities and sociology rejected this instrumental function of external evaluation procedures. One argument focused on the discipline-specific publication landscape, which comprises a variety of different publication types and is insufficiently covered by citation indices.

As the interviewee from legal studies reported, journals in his field do not have an impact factor, so that research organisations have to rely on the inside knowledge of their reviewers and on qualitative evaluation criteria. From the outside perspective of administration, which often evaluates research proposals from different disciplines or interdisciplinary working groups, this makes it more difficult to compare research output. Reviewers from other disciplines do not have insight into the informally organised hierarchy of places of publication. Because interdisciplinary review panels are used, the opportunities to classify publication lists in legal studies adequately are generally lacking. ‘The legal scholars hope that there is at least one of them in this group who will explain, if necessary, to the others what these kinds of media are’ (A. Peukert). Meanwhile, there is pressure from the side of the responsible organisations to ‘introduce formalised procedures and achieve rankings and to signal that this is conducted seriously’ (A. Peukert).

Interviewees from history of science, sociology and history of art also mentioned administrative efforts to quantify science and research. The historians of science expressed a coherent opinion and seemed unwilling to use any form of evaluation. ‘I am also surprised but history still seems to reject this kind of thinking’ (M. Ash). Impact factors are considered disruptive in recruitment interviews and as having no relevance. The European Reference Index for the Humanities (ERIH), created in 2002 by the European Science Foundation as a reference index for European humanities journals and revised many times since, is ‘simply not noticed’ in the scientific community (M. Ash).33

The interview partner from sociology described the handling of impact factors in recruitment interviews in a more heterogeneous way. Young researchers ascribe a lot of importance to their accumulated impact points and list their publications according to formal evaluation mechanisms. ‘First they list the contributions in international peer-reviewed journals, then national peer-reviewed journals, sometimes with impact factor, where you have them and then comes the rest, the crappy rest’ (U. Schimank). The interviewee did not, however, ascribe a legitimising function to the impact factors.34 Instead, he made it clear that their use for distributive decisions entails a loss of reputation. ‘There are even audacious colleagues who take this seriously, because in our field, you can’t take that seriously’ (U. Schimank). A similar effect, although not motivated epistemically, was described by the interviewee from history of art. ‘Those who start mentioning the impact factor hardly have a say’ (H. Bredekamp). Under the primacy of the methodological avant-garde, performance indicators can explicitly turn out to be a negative criterion of selection.

The interviewee from mathematics emphasised the risks of using publication-based indicators insofar as these replace scientific truth and guide allocative decisions (cf. Luhmann 1974: 237). The accumulation of high impact numbers in publication lists is not a central recruitment criterion. Rather, the respective publication performances are evaluated individually and in their context. The interviewee feared that the institutionalisation of performance indicators could lead to a bureaucratic and meaningless administration of career opportunities. ‘We just don’t want to have an evaluation mechanism that calculates the h-index and other indicators and then automatically assigns scientists to a certain category of quality’ (M. Grötschel).35 One criticism is aimed at the unit of reference of the journal impact factor, which measures the overall impact of the journal but not that of the individual contributions (cf. Marx 2009). Authors with less-cited contributions could then falsely receive credit owing to the success of other authors. Similar to the principle of high-quality reviewing before publication, the evaluation of individual scientists before recruitment is not possible without the expertise of competent peers or ‘the individual assessment of the person and his or her performance’ (M. Grötschel).

While the interviewees from mathematics, law and history of science strongly criticised the reduction to a quantitative performance measurement, it is precisely this that makes the journal impact factor attractive, according to the interviewee from medical engineering: ‘It is the only thing they can really count’ (O. Dössel). In the context of recruitment procedures, performance indicators provide a standardised criterion of evaluation, which makes it easier to compare research output. Aside from other, soft factors, such as the evaluation of the topic, the median impact value of an applicant is ‘one point among many, which can easily be measured and is therefore significant’ (O. Dössel). Performance indicators, such as the impact factor, do not function as an exclusive criterion of selection, but enter into further evaluative decisions. According to the interviewee from physics, cost-benefit calculations as well as a lack of alternatives also contribute to the use of performance indicators in recruitment procedures. ‘Checking the publication lists in detail is no longer possible because of the sheer mass of publications in the lists, and that’s why we almost always end up with this bibliometric indicator’ (S. Großmann). Adaptations on the part of the scientists – publishing new research results in small units and in rapid succession to gain impact points – influence the structure of the publication system. ‘It’s definitely that way that publishing in general has followed external measurability’ (S. Großmann).

From the perspective of scientists as producers of knowledge, the possibilities and limitations of bibliometric measurement are assessed differently. Impact factors can – insofar as they are considered a metric reflection of the hierarchy of the media of publication – facilitate the selection of suitable places of publication and reduce the overhead costs of science (cf. Luhmann 1992: 248–251). As the examples from medical engineering and physics show, in a rapidly growing publication system the journal impact factor proves to be a functional equivalent of experiential knowledge about reputation. At the same time, high impact values, in connection with assumptions about quality, promise reputational benefit and become established in the motivational structure of science (for example, via the certification function of ranked journals). In both disciplines, the orientation towards performance indicators influences individual publication behaviour as well as selection decisions in recruitment procedures. The application of publication-based indicators enables a standardised measurement and comparison of research performance. In cases of high numbers of applicants, it also offers a shortcut for evaluating publication lists. In contrast to such pragmatic advantages, there are stronger reservations in mathematics regarding a widespread use of performance indicators. Methodological problems of calculation and the general loss of contextual information in quantitative indicators lead to a restricted use of performance indicators in mathematics.

In the humanities and social sciences, performance indicators are hardly or only weakly institutionalised due to the small and fragmented publication landscape in these fields. Methodological aspects, for example, the lower coverage of publication types or distorting effects due to citation behaviours, limit the validity of publication-based indicators. Moreover, epistemic reasons, such as the categorical distinction between quality and quantity, strengthen the mostly negative attitude of scientists towards the use of performance indicators.

6 Conclusion

The results of this case study illustrate that differences with respect to time frames, contents and social organisation in the various disciplines constitute specific publication behaviour. These have effects on the structure of the publication system as well as the development and interaction of responsible organisations. Taking into account current dynamics of change, such as digitisation, economisation and intensified observation of scientific productivity from outside, structural connections of the system of science in the respective disciplinary contexts can be seen. For example, there is a fundamental connection between the variety of publication types used and the requirements towards the presentation and reception of research results. Aside from epistemic factors, the latter determine the different relevance of digital and analogue forms of publication. Differences of the practical relevance of digitisation moreover have an effect on the dissemination of open access. Further influential factors lie in the financing models of gold open access and the attitudes of the scientific community towards publication fees, on the one hand, and expectations towards costs and benefits of OA publishing, on the other.

Aside from different mechanisms of scientific publishing that refer to one another, and which influence the development of digitisation and open access, the empirical material also provided insight into the steering function of peer review and bibliometric performance measurement. In the natural and engineering sciences, evaluation mechanisms – qualitative peer reviewing and quantitative performance measurement – are in general more strongly institutionalised than in the humanities and social sciences. Due to feedback effects, there are changes in the publication system of these disciplines. Such changes can be seen, for example, in the preferred types of publications or in the increasing frequency of publications in small units. The medium of reputation turned out to be a significant dimension that directly influences the publication behaviour of the peers. Changes that, as in the case of gold open access or bibliometric performance measurement, concern the publication infrastructure, in turn affect the incentive structures of scientific publishing. The analysis has shown that the scientific communication system appears to consist of diverse, mutually influencing factors.

References

Andermann, H. & Degkwitz, A. 2004. Neue Ansätze in der wissenschaftlichen Informationsversorgung: Ein Überblick. Historical Social Research, 29(1):6–55. Retrieved from http://nbn-resolving.de/urn:nbn:de:0168-ssoar-50509 [Accessed 13 January 2015].

Antelmann, K. 2006. Self-archiving practice and the influence of publisher policies in the social sciences. Learned Publishing, 19(2):85–95. Retrieved from http://www.ingentaconnect.com/content/alpsp/lp/2006/00000019/00000002/art00002 [Accessed 4 March 2015].

Boehm, G. 2009. Publikationsverhalten in der Kunstgeschichte/Kunstwissenschaft. In Alexander von Humboldt Stiftung (ed.). Publikationsverhalten in unterschiedlichen Disziplinen: Beiträge zur Beurteilung von Forschungsleistungen. Second edition. Diskussionspapiere der Alexander von Humboldt Stiftung, 12/2009. Bonn, 62–63.

Bogner, A., Littig, B. & Menz, W. 2014. Interviews mit Experten: Eine praxisorientierte Einführung. Wiesbaden: Springer.

Bourke, P. & Butler, L. 1996. Publication types, citation rates and evaluation. Scientometrics, 37(3):473–494.

Chang, Y.-W. 2013. A comparison of citation contexts between natural sciences and social sciences and humanities. Scientometrics, 96:535–553.

DFG (Deutsche Forschungsgemeinschaft). 2005. Publikationsstrategien im Wandel? Ergebnisse einer Umfrage zum Publikations- und Rezeptionsverhalten unter besonderer Berücksichtigung von Open Access. Tabellenband. Retrieved from http://www.dfg.de/download/pdf/dfg_im_profil/evaluation_statistik/programm_evaluation/studie_publikationsstrategien_tabellenband.pdf [Accessed 2 January 2015].

DFG (Deutsche Forschungsgemeinschaft). 2013. Sicherung guter wissenschaftlicher Praxis. Denkschrift. Retrieved from http://www.dfg.de/download/pdf/dfg_im_profil/reden_stellungnahmen/download/empfehlung_wiss_praxis_1310.pdf [Accessed 3 March 2015].

Eppelin, A., Pampel, H., Bandilla, W. & Kacmirek, L. 2012. Umgang mit Open-Access-Publikationsgebühren – die Situation in Deutschland 2010. GMS Medizin – Bibliothek – Information, 12(1/2):1–12. Retrieved from http://www.egms.de/static/en/journals/mbi/2012-12/mbi000240.shtml [Accessed 25 February 2015].

Fry, J. & Talja, S. 2007. The intellectual and social organization of academic fields and the shaping of digital resources. Journal of Information Science, 33(2):115–133.

Gargouri, Y., Larivière, V., Gingras, Y. & Harnad, S. 2012. Green and gold open access percentages and growth, by discipline. arXiv. Retrieved from http://arxiv.org/abs/1206.3664 [Accessed 23 March 2015].

Gritzmann, P. 2009. Publikationsverhalten in der Mathematik. In Alexander von Humboldt Stiftung (ed.). Publikationsverhalten in unterschiedlichen Disziplinen: Beiträge zur Beurteilung von Forschungsleistungen. Second edition. Diskussionspapiere der Alexander von Humboldt Stiftung, 12/2009. Bonn, 82–83.

Haug, R.J. 2009. Publikationsverhalten in der Festkörperphysik. In Alexander von Humboldt Stiftung (ed.). Publikationsverhalten in unterschiedlichen Disziplinen: Beiträge zur Beurteilung von Forschungsleistungen. Second edition. Diskussionspapiere der Alexander von Humboldt Stiftung, 12/2009. Bonn, 95–98.

Havemann, F. 2009. Einführung in die Bibliometrie. Retrieved from http://www.wissenschaftsforschung.de/Havemann2009Bibliometrie.pdf [Accessed 17 March 2015].

Hirsch, J.E. 2005. An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46):16569–16572.

Hornbostel, S., Klingsporn, B. & Von Ins, M. 2009. Messung von Forschungsleistungen – eine Vermessenheit? In Alexander von Humboldt Stiftung (ed.). Publikationsverhalten in unterschiedlichen Disziplinen: Beiträge zur Beurteilung von Forschungsleistungen. Second edition. Diskussionspapiere der Alexander von Humboldt Stiftung, 12/2009. Bonn, 14–34.

IMU (International Mathematical Union). 2008. Citation statistics. Retrieved from http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf [Accessed 4 March 2015].

Kuckartz, U. 2007. Einführung in die computergestützte Analyse qualitativer Daten. Wiesbaden: VS Verlag für Sozialwissenschaften.

Lossau, N. 2008. Der Begriff ‘Open Access’. In Der Deutschen UNESCO-Kommission (ed.). Open Access. Chancen und Herausforderungen. Ein Handbuch. Köln: Gebrüder Kopp, 18–22.

Luhmann, N. 1968. Vertrauen. Ein Mechanismus der Reduktion sozialer Komplexität. Stuttgart: Ferdinand Enke.

Luhmann, N. 1974. Selbststeuerung der Wissenschaft. In Soziologische Aufklärung. Aufsätze zur Theorie sozialer Systeme. (Ders.) Opladen: Westdeutscher, 232–252.

Luhmann, N. 1984. Die Wirtschaft der Gesellschaft als autopoietisches System. Zeitschrift für Soziologie, 13(4):308–327.

Luhmann, N. 1992. Die Wissenschaft der Gesellschaft. Frankfurt am Main: Suhrkamp.

Mantz, R. 2006. Open Access-Lizenzen und Rechtsübertragung bei Open Access-Werken. In G. Spindler (ed.). Rechtliche Rahmenbedingungen von Open Access-Publikationen. Göttingen: Universitätsverlag Göttingen, 55–103.

Marx, W. 2009. Forschungsbewertung auf der Basis von Zitierungen – Aussagekraft und Grenzen der Methode. In Alexander von Humboldt Stiftung (ed.). Publikationsverhalten in unterschiedlichen Disziplinen: Beiträge zur Beurteilung von Forschungsleistungen. Second edition. Diskussionspapiere der Alexander von Humboldt Stiftung, 12/2009. Bonn, 132–155.

Münch, R. 2009. Publikationsverhalten in der Soziologie. In Alexander von Humboldt Stiftung (ed.). Publikationsverhalten in unterschiedlichen Disziplinen: Beiträge zur Beurteilung von Forschungsleistungen. Second edition. Diskussionspapiere der Alexander von Humboldt Stiftung, 12/2009. Bonn, 69–77.

Nederhof, A.J. 2006. Bibliometric monitoring of research performance in the social sciences and the humanities: A review. Scientometrics, 66(1):81–100.

Neidhardt, F. 2006. Fehlerquellen und Fehlerkontrollen in den Begutachtungssystemen der Wissenschaft. In S. Hornbostel & D. Simon (eds). Wieviel (In-) Transparenz ist notwendig? Peer review revisited. iFQ working paper no. 1, 7–13.

Neidhardt, F. 2010. Selbststeuerung der Wissenschaft: Peer Review. In D. Simon, A. Knie & S. Hornbostel (eds). Handbuch Wissenschaftspolitik. Wiesbaden: VS Verlag für Sozialwissenschaften, 280–292.

Roxin, C. 2009. Publikationsverhalten im Bereich der Jurisprudenz. In Alexander von Humboldt Stiftung (ed.). Publikationsverhalten in unterschiedlichen Disziplinen: Beiträge zur Beurteilung von Forschungsleistungen. Second edition. Diskussionspapiere der Alexander von Humboldt Stiftung, 12/2009. Bonn, 64–66.

Schreier, M. 2012. Qualitative content analysis in practice. Thousand Oaks, CA: Sage.

Swan, A. 2007. Open access and the progress of science. The American Scientist, 95:198–200.

Taubert, N. 2010. Open access. In D. Simon, A. Knie & S. Hornbostel (eds). Handbuch Wissenschaftspolitik. Wiesbaden: VS Verlag für Sozialwissenschaften, 310–321.

Taubert, N. & Schön, K. 2014. Online-Konsultation ‘Publikationssystem’. Dokumentation und Auswertung. Retrieved from http://edoc.bbaw.de/volltexte/2014/2629/pdf/BBAW_Publikationssystem_Taubert.pdf [Accessed 8 January 2015].

Volkmann, U., Schimank, U. & Rost, M. 2014. Two worlds of academic publishing: Chemistry and German sociology in comparison. Minerva, 52:187–212.

Weingart, P. 2003. Wissenschaftssoziologie. Bielefeld: Transcript.

Weingart, P. 2005. Die Stunde der Wahrheit? Zum Verhältnis der Wissenschaft zu Politik, Wirtschaft und Medien in der Wissensgesellschaft. Weilerswist: Velbrück Wissenschaft.

Weingart, P. & Winterhager, M. 1984. Die Vermessung der Forschung. Theorie und Praxis der Wissenschaftsindikatoren. Frankfurt: Campus.

Weller, C. 2004. Beobachtungen wissenschaftlicher Selbstkontrolle. Qualität, Schwächen und die Zukunft des Peer Review-Verfahrens. Zeitschrift für Internationale Beziehungen, 11(2):365–394.

Wissenschaftsrat. 2008. Pilotstudie Forschungsrating Soziologie. Abschlussbericht der Bewertungsgruppe. Retrieved from http://www.wissenschaftsrat.de/download/Forschungsrating/Dokumente/Grundlegende%20Dokumente%20zum%20Forschungsrating/8422-08.pdf [Accessed 2 March 2015].

1 The present contribution is based on the conceptual understanding of the Academy’s ‘Future of the Scholarly Publication System’ interdisciplinary working group. See the contribution by Taubert and Weingart in this volume for details on the concepts publication system, publication infrastructure, responsible organisations.

2 See http://www.sehepunkte.de/ and http://www.hsozkult.de/.

3 Example of the Census of the Antique Works of Art and Architecture Known in the Renaissance, freely accessible at http://www.census.de/.

4 Exhibition catalogues document an increasing connection between universities and the world of museums. Exhibitions produce new states of research and, as ‘[a]cademies for a time’ (H. Bredekamp), influence the steering of content of research (cf. Boehm 2009: 63).

5 On the definition of open access, see the contribution by Ball in this volume. Cf. also http://www.budapestopenaccessinitiative.org/read. Two other public declarations from 2003 have supplemented the development of fundamental principles and goals of OA: the Berlin Declaration (cf. http://openaccess.mpg.de/Berlin-Declaration) and the Bethesda Statement in Open Access Publishing of 2003 (http://legacy.earlham.edu/~peters/fos/bethesda_ger.htm).

6 Copyright and rights of usability are defined in specific licence agreements, for example, via Creative Commons or the Digital Peer Publishing Licenses (cf. Mantz 2006). See the contribution by Peukert and Sonnenberg in this volume.

7 In the English language publication system, the proportion of green OA has increased significantly. Swan (2007) found an increase of 26 percentage points between 2004 and 2005 and concludes that almost half of the scientists participate in the self-archiving of their contributions (cf. Swan 2007: 200). Pioneers are mathematics and physics, initiating a repository in 1991 with arXiv, which currently contains 1 014 771 e-prints from physics, mathematics and related disciplines (cf. http://arxiv.org/). Researchers from the humanities and social sciences are less active in self-archiving (cf. Antelmann 2006; Gargouri, Larivière, Gingras & Harnad 2012).

8 The proportion of gold OA has increased in the past 10 years. The Directory of Open Access Journals currently lists 10 254 journals whose articles are all freely accessible (cf. http://doaj.org/). In German-speaking countries, 20.1% and 17.6% of journal contributions in the natural and engineering sciences, respectively, were freely accessible at the original place of publication in 2005. In the humanities and social sciences, it is much lower at 5.9%, for monographs only 2.7% (cf. DFG 2005: 45).

9 According to the study of the DFG cited above, 21.1% of engineers and 46% of natural scientists have had to pay for publication in a specialised journal. In the humanities and social sciences, the proportion is much lower at 7.2% (cf. DFG 2005: 21).

10 Specific cost problems in history of art are discussed at the end of this section. There are no data in the interviews on sociology and law that would enable a judgement on the attitudes in these disciplines.

11 At the non-profit organisation PLoS (Public Library of Science, http://www.plos.org/), the price is currently USD 1 350 to 2 900 per article (cf. http://www.plos.org/publications/publication-fees/). Depending on the journal, Springer Verlag demands between USD 665 or € 500 and USD 1 996 or € 1 575 per article submission in the Moving Wall Model (cf. http://www.springeropen.com/about/apcfaq/howmuch) and USD 3 000 or € 2 200 in the Open Choice Model (cf. http://www.springer.com/gp/open-access/springer-open-choice).

12 In the English-speaking sphere, open access infrastructures are more common (cf. also http://openaccess.net/de/oa_in_verschiedenen_faechern/rechtswissenschaft/).

13 Beck-Online is, according to A. Peukert, a well-functioning database. In addition, the law portal Juris offers different subscriptions; the most common one (juris professional) costs € 1 200 per year (cf. http://www.juris.de/jportal/nav/produkte/juris_produkte/jurisprofessionell/produktuebersicht_professionell.jsp). These databases ‘are subscribed to by many law firms and companies. That is a very big business’ (A. Peukert).

14 Difficulties of philosophy of science to establish and stick to verifiable and measurable criteria have been discussed in the literature and shall not be repeated here (cf. for example, Neidhardt 2006; Weller 2004). The focus here is on practical experience and attitudes of the interviewees towards peer review in their respective disciplines.

15 See 3.1.

16 This is also true regarding the establishment and maintenance of databases in which a representative amount of test data for different issues is collected. The interviewee in this context points to a basic consensus about the norms of ‘high-quality archiving’ (M. Grötschel) and a strong willingness among peers to provide voluntary services of quality assurance.

17 Hornbostel et al. (2009) estimate a half-life period of 5.2 to 6.9 years for different areas of physics (cf. Hornbostel et al. 2009: 28). According to a study by the DFG in 2005, publication rates in the natural and engineering sciences are 21.8 and 17.6 journal articles per author respectively (including co-authorship) for a period of five years (cf. DFG 2005: 22–25).

18 The DFG therefore recommends not demanding a minimum but a maximum number of publications in the framework of applications (cf. DFG 2013: 20–21).

19 In this connection, the technological opportunities of digitisation are essential.

20 Informal knowledge about colleagues – ‘you just know what has already been done’ (S. Großmann) and consensual normative expectations make it possible to reduce the complexity of the scientific communication system by means of the mechanism of trust (cf. Luhmann 1968: 21–29).

21 Its reach is limited to the German-speaking sphere; international reception requires additional publication in English.

22 Lit and Lang are German publishers mostly publishing dissertations.

23 This pertains to the dogmatically oriented continental European jurisprudence. In the Anglo-Saxon world, a social science perspective of law prevails.

24 According to the interviewee, monographs and anthologies are only reviewed in German-speaking sociology and on special occasions.

25 A similar effect was described by the interviewee from medical engineering (see section 3.2).

26 The journal impact factor is calculated as the number of citations in the year of reference to all articles of the previous two years divided by the number of all articles in the previous two years (cf. Havemann 2009: 49; Hornbostel et al. 2009: 28–29).
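Stated as a formula (a minimal formalisation of the calculation described in this footnote; the symbols are introduced here for illustration only and do not come from the source): with y the year of reference, c_y(t) the number of citations received in year y by articles published in year t, and n_t the number of articles published in year t,

\[ \mathrm{JIF}_{y} \;=\; \frac{c_{y}(y-1) + c_{y}(y-2)}{n_{y-1} + n_{y-2}}. \]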

27 The data on bibliometric analysis in the natural and technological sciences are primarily based on the Science Citation Index (SCI) of Thomson Reuters (cf. http://wokinfo.com/citationconnection/).

28 IEEE, the Institute of Electrical and Electronics Engineers (https://www.ieee.org/), publishes several journals on the basis of the subscription model. Members of the IEEE have inexpensive access to high-ranked journals. PLoS is an established OA medium in the natural and technological sciences that is financed through publication fees.

29 The interviewee also mentioned the difficulties of calculating impact factors in a valid manner and of standardising them for comparisons across disciplines. This issue is discussed extensively in the scientometric literature (cf. for example, Bourke & Butler 1996; Chang 2013; Nederhof 2006).

30 Visibility by publishing at Beck is restricted to the German-speaking sphere. In the English-speaking field, there are repositories, for example, the Social Science Research Network (cf. http://www.ssrn.com/).

31 Google Scholar is currently limited to articles from 2009 to 2013. Moreover, the data pool of the source items is unclear. Google itself notes, ‘Since Google Scholar indexes articles from a large number of websites, we can’t always tell in which journal a particular article has been published’ (cf. http://scholar.google.de/intl/de/scholar/metrics.html#coverage).

32 ISIS was founded in 1912 and is the oldest and most widely circulated English-language journal in the history of science (cf. http://www.press.uchicago.edu/ucp/journals/journal/isis.html).

33 Cf. http://www.esf.org/media-centre/ext-single-news/article/european-science-foundation-releases-the-2011-revised-lists-of-european-research-index-for-humanitie.html and http://www.esf.org/index.php?id=4813.

34 See section 5.1.

35 The Hirsch Index (h-index) surveys the performance of individual persons on the basis of the number and citation of published works (cf. Hirsch 2005).
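As a hedged illustration of Hirsch’s definition (the notation is introduced here and is not taken from the source): if a researcher’s N publications are ordered by decreasing citation counts c_1 \geq c_2 \geq \dots \geq c_N, then

\[ h \;=\; \max\{\, k \in \{1, \dots, N\} : c_{k} \geq k \,\}, \]

that is, the largest k such that at least k publications have been cited at least k times each (with h = 0 if no publication has been cited).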