Common errors and challenges in publishing in a peer refereed Library and Information Journal

This article discusses common errors emanating from authors submitting manuscripts or papers for publication in peer refereed Library and Information journals. It is hoped that this paper will provide established, novice and potential scholarly journal authors with valuable information enabling the improvement of their manuscripts before submission for publication. The paper primarily uses the author's experience as editor-in-chief of a peer refereed accredited LIS journal, among other related experiences, as well as 85 peer reviewer reports on manuscripts submitted to the South African Journal of Libraries and Information Science, to analyse and discuss common errors made by authors on submitted manuscripts for publication, and the challenges facing these authors.


1 Introduction
The aim of this article is to discuss, by using experiential knowledge, related studies and samples of anonymous reviewer reports, common errors in manuscripts submitted by authors for publication in a peer refereed Library and Information Journal. The paper's structure consists of: an introduction that presents the problem concept and the aim and objectives of the paper; section two examines scholarly journals and peer reviews; section three presents the method and procedure applied whilst producing this paper; section four presents the results/findings emanating from the analysis of peer reviewer reports on sampled manuscripts submitted to the South African Journal of Libraries and Information Science from 2002-2006; section five discusses the results; and a final section presents conclusions and recommendations.
Publication of research findings is a fundamental aspect of research dissemination and knowledge sharing processes, and such publications often go through a number of stages before they appear in the public domain for wider circulation and readership. Authors of research papers come from different backgrounds, scholarly traditions and writing dispositions. One of the aspirations of scholarly publishing is publication in top peer refereed scholarly journals, normally of international standing as outlined later. Peer refereeing is common practice amongst scholars, whereby research output undergoes thorough evaluation by peers, mostly in the same research domain or discipline. This is in order to determine or vet the quality of output in terms of originality, relevance/significance/contribution to knowledge, methodology, awareness of research in the domain through the review of related studies, and readability, among others. Thus, peer reviews are important quality control mechanisms used by the scholarly community and most scholarly journals to establish the suitability of a manuscript for publication in a journal. Put another way, 'no analysis of research publishing can avoid underlining the critical role of editing and peer review in the maintenance of the global system of knowledge production: accumulation and use' (Pouris 2006:xiv). Ultimately, as a measure of quality control, peer reviewing (both content and form review) is still strongly favoured by scholarly journals. In South Africa's Policy and Procedures for Measurement of Research Output of Public Higher Education Institutions (see http://education.pwv.gov.za/content/document/307.pdf:6), produced by the Department of Education, journals are required, under recognised research output, to comply with a list of seven minimum criteria in order to be eligible for inclusion in the list of approved journals. Among them, also relevant to this paper, are that 'The purpose of the journal must be to disseminate research results
and the content must support high level learning, teaching and research in the relevant subject area; Articles accepted for publication in the journal must be peer reviewed; the journal must have an editorial board that includes members beyond a single institution and is reflective of expertise in the relevant subject area'. Considerable similarities exist between the above criteria and the inclusion requirements of journals in the Thompson Scientific - ISI journal indexes (see http://www.isinet.com) and the National University Commission of Nigeria (see Mabawonku 2005:21). The exceptional publication demands of accredited peer reviewed journals perhaps help explain why South Africa, which occupies a leading position in research publishing in Africa, is rated amongst the lowest producers of research publications in the world. A recent report notes that authors publish to 'disseminate new research findings or ideas. The publication of a paper establishes precedents in the formation of new knowledge, and it puts new information in the professional domain where it can be scrutinized, criticized and either accepted or rejected. It may then contribute to further discourse. The author also makes personal gains by adding to a list of publications that can be used for tenure and promotion, for gaining professional acceptance that may lead to speaking engagements, consultancy work, perhaps even awards'. Put another way, Murray in Stilwell (2006:7) summarises the reasons as follows: career progression, or moving up to the next rung on the ladder; gaining recognition for work done; preventing others from taking credit for one's work or using one's materials; helping one's students gain recognition for their work; learning higher standards of writing; contributing to knowledge; building one's institution's status and developing a profile. Other reasons worth mentioning could also potentially be: justification for funding by an individual, department or institution; tenure or permanent appointment; gratification,
or boosting one's ego through recognition; community practice and incentive. As is generally known, in South Africa research publications in pre-listed journals are generously rewarded through government subsidies amounting to approximately (as figures rise regularly) US$11,820 or R85,100 paid out to the institution of the author's affiliation for each qualifying article published in the pre-listed journal. In turn, this determines the formula for resource allocation to the contributors. Since academic or scholarly journals are the main conveyors of knowledge or research output, they often undergo rigorous evaluation leading to their ranking nationally and/or internationally. One of the quantitative measures that has received strong international support, but also criticism, for categorizing, comparing, evaluating and ranking journals was developed by Eugene Garfield in Stilwell (2006:4) of ISI, which is now Thompson Scientific. It is based on the Journal Impact Factor (JIF), which relies on evaluating the impact of citation frequencies of articles in a journal. The Citation Impact Factor (CIF), proposed by Eugene Garfield in 1969 (Garfield, 1996:411), is defined as the average number of citations in a given year of articles published in a journal in the preceding two years. The ratio is obtained by dividing the citations received in one year by the papers published in the two previous years (see also Onyancha and Ocholla 2006:4).
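As an illustration of the ratio just described, the two-year calculation can be sketched in a few lines. The citation and publication counts below are hypothetical, purely to show how the ratio works:

```python
# Illustrative two-year journal impact factor, per Garfield's definition:
# citations received in year Y to articles published in years Y-1 and Y-2,
# divided by the number of articles published in those two years.
# All figures are invented for illustration.

def impact_factor(citations_received, articles_published):
    """citations_received: citations in year Y to articles from the two
    preceding years; articles_published: articles in those two years."""
    return citations_received / articles_published

# A hypothetical journal publishes 30 + 25 articles in 2005 and 2006,
# and those articles are cited 88 times during 2007.
jif_2007 = impact_factor(88, 30 + 25)
print(round(jif_2007, 2))  # 1.6
```

A higher ratio indicates that, on average, each recent article attracted more citations in the measurement year.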

SAJnl Libs & Info Sci 2007, 73(1)
Garfield in Stilwell (2006:3) reasons that such evaluation reduces bias by ensuring that 'large journals over small ones, or frequently issued journals over less frequently issued ones, and older journals over newer ones' are not favoured by evaluations and rankings. Critics of quantitative measures such as Gorman (2000) and Calvert and Gorman (2002), while recognizing traditional ways of measuring journal 'qualities' as opposed to quantitative indicators such as 'circulation, total number of pages per volume, number of times cited in the literature, coverage by indexing services, etc.' (Calvert and Gorman 2002:1), argue in favour of using largely qualitative measures. They reason that 'The fact that paper x is cited y times is not an indicator of quality, but rather that it is cited - it is available, it is in a journal held by many libraries, the author (or publisher or editor) is particularly good at self-promotion' (Calvert and Gorman 2002:3). Where a journal is indexed (e.g. in the ISI indexes), the level of circulation and citation frequency also play a role. Journal rankings seem to receive strong support as projected by several authors cited by Stilwell (2006:3-5), such as Harnad, Carr, Brody and Oppenheim. Peer review is listed as one of the qualifying factors.

Peer review
Peer review has a history that extends over more than three hundred years of learned inquiry, acting as a traditional instrument of quality control involving screening intellectual output for quality, reliability and credibility. Peers are credible scholars or qualified adjudicators in a discipline or subject domain that scholars or journals rely upon for views or comments on the content suitability of a manuscript for publication in a scholarly or academic journal. The process of this 'review' service, in the form of comments for the journal editor's and/or author's attention, is popularly called 'peer review'. It is built on the premise that research output (articles, monographs, research reports, patents, etc.) will earn more credibility, be more accepted, contribute more towards a society or discipline, command more respect and be more reliable if peers (experts in the discipline) vet its quality by scrutinizing, screening and evaluating its content and format. The latter is checked for theoretical soundness, originality, significance and contribution to knowledge, upon which it is recommended for publication or dissemination to the scholarly community through mainstream academic or scholarly journals. Peer review, therefore, should generally improve the quality of research output, improve the standard of scholarly communication, protect the public/scholarly community from unreliable or invalid information or knowledge, and safeguard the reputation and recognition of individuals, affiliate institutions and academic journals. Although peer review is widely used for determining the quality of publication in journals, it is also liable to weaknesses. Most of these weaknesses are intellectual (insufficient knowledge in the subject domain), moral or psychological (bias), sociological (distance from context and politics), and arrogance and ignorance, among others. However, it is recognized that quality control is fallible; peer review is no exception. Strong critics of peer
review, such as Tipler (2003), when referring to and analyzing cases involving prominent discoveries in science such as 'Copernicus's heliocentric system, Galileo's mechanics, Isaac Newton's grand synthesis and Charles Darwin's evolution theory', as well as highly respected Nobel prize winning papers (such as Albert Einstein's), argue that 'today, the peer refereeing process works primarily to enforce orthodoxy' and offer 'evidence that "peer" review is not peer review: the referee is quite often not as intellectually able as the author whose work he judges. We have pygmies standing in judgment on giants' (Tipler 2003:2). However, Tipler does compromise by proposing that 'leading journals in all branches of science establish a "two-tier" system. The first tier is the usual referee system. The new tier will consist of publishing a paper in the journal automatically if the paper is submitted with [a] letter from several leading experts in the field, "this paper should be published"' (Tipler 2003:10). That, in my opinion, still leads us back to peer review. Equally intriguing, but fairly constructive and sometimes subversive, debate on this issue is offered by Steven Harnad (see: http://www.princeton.edu/~harnad/intpub.html, http://cogsci.soton.ac.uk/~harnad/intpub.html). In one of his many seminal articles on peer review (Harnad, 1998: paragraph one), he argues that journals should not be free from the 'process of peer review, whose "invisible hand" is what maintains its quality'.

Peer review process
Peer reviewers are expected to be competent and credible scholars in order to be eligible to participate in a review process. Gorman (2000:101), for example, identifies three attributes of good reviewers: competent researcher, objective assessor and comparative evaluator. Although peer review processes vary from journal to journal, there exist strong similarities concerning manuscript flow from author to editor to reviewer. For instance, SAJLIS processes involve 9 steps as outlined by Ocholla (2006:18). The refereeing procedure is as follows:
• Authors normally inform the Editor-in-Chief of their intention to publish in a journal and receive consent to post or e-mail the manuscripts to the Editor.
• Manuscripts are received by e-mail and/or post according to the journal's publication guidelines.
• Authors receive acknowledgment from the Editor.
• The Editor-in-Chief verifies manuscripts for suitability for publication in the journal.
• Suitable manuscripts are e-mailed to journal reviewers (normally consisting of LIS scholars of standing, members of the Editorial Advisory Board, the Journal Management Team and others identified by their expertise and publication profile). Through [double] blind reviews the referees evaluate the manuscripts, for a duration not exceeding one month, before sending them back to the Editor-in-Chief. At least two reviewers must evaluate each article. The Reviewers' Evaluation Form is enclosed with each manuscript for the reviewer's guidance.
• Both accepted and rejected manuscripts are e-mailed to the author(s), with a full but relevant report by the reviewers (authors do not get to know who reviewed their manuscripts).
• Authors make corrections and e-mail their final document to the Editor if the manuscript is accepted.
• The Editor, after verification, sends the manuscript to the Publisher.
• Publication is normally expected within the specified dates [30th March, June and December].
It is SAJLIS policy to encourage and support LIS authors. However, in order to improve the quality of publications, manuscripts that are unanimously recommended by at least two reviewers for substantive revision or rejection may not be published. Although guidelines are important for guiding reviewers, most journals do not provide them, as is the case with LIS journals in Nigeria (Mabawonku, 2005).
A large part of this process is outlined graphically in Figure 1.

Nature and type of review
The nature, type and level of review is normally outlined in instructions to reviewers, which are sent to the reviewers together with or separately from the manuscript. Thus, the tasks of editors and reviewers are clearly spelt out in order to guide the review process and avoid inconsistency or avoidable bias. Should the criteria for evaluating submitted manuscripts be uniform? Reviewers are normally required to evaluate and rate the manuscripts and either recommend them for publication - without [any] corrections, with minor corrections, or with substantial corrections that may demand a complete revision of the manuscript and a follow-up review - or reject them. In most instances, reviewers are required to determine or judge the quality of the manuscript in terms of theoretical and methodological validity, originality, significance and contribution, and readability. Tipler (2003:2) outlines three criteria informing judgement, stating them as the validity of the claims made in the paper, the originality of the work or whether similar work has already been done, and 'whether the work, even if correct and original, is sufficiently "important" to be worth publishing in the journal'. Gorman (2000:102-103), for example, identifies six criteria for assessing submissions to Asian LIS journals as: the advancement of knowledge, new information or data; theoretical validity (use of appropriate theory or multiple theories); level of scholarship (quality of analysis and the author's ability to generate new knowledge); acceptable research design and appropriate methodology and analysis that assist referees in establishing levels of 'contribution in terms of knowledge or information conveyed'; originality of the contribution; and the soundness of the methodology, findings and structure.

Common errors in Scholarly (LIS) Journals
Errors do not necessarily occur only during the preparatory phase of publication, but also at the early stages of research design.
Mistakes that occur during the preparation of LIS theses and dissertations by students, as discussed by Kaniki (2000), are frequently carried over to the preparation and submission of manuscripts for publication in an LIS peer refereed journal. Hinchliffe (2003:3) advises that 'thinking about your final manuscript begins when you start thinking about your project'. In her view, this includes: searching or reviewing literature and placing the project in context; choosing a topic and determining the relevancy of the topic; manuscript and component organization; and technical preparation (proof reading, typographical errors, and adherence to the requirements provided by publishers in their Guide to Authors, etc.). Smaby, Crews and Downing (1999), citing Dies, Henson and McGowen, identify the following as areas in which technical writing errors are made by aspiring authors: selecting topics to write about; describing research methods; following the American Psychological Association (APA) format; citing related research; using the appropriate writing style; and responding productively to feedback on manuscripts from editors. Reporting on studies that focus on determining the degree of association between selected variables and the acceptance or rejection of manuscripts, based on a sample of 180 manuscripts submitted during 1997 to 1998 to 'Counselor Education and Supervision' (CES), the three authors show that over 50%, with scores exceeding a 70% rejection rate, emanated from: weak critiques of relevant research studies; disorganized literature reviews; poor use and description of statistical analyses; poor presentation and choice of research procedures, descriptions of instruments or choice of research procedures; illogical conclusions drawn from results reported in their manuscripts; impracticality in the description of implications in their manuscripts; poor description and integration of relevant directions for future research; and improper use of the APA style. Errors also
occur when individuals choose the wrong journal for a manuscript. Searing (2003) advises that it is important to find out whether or not the journal is peer-reviewed and whether the journal is prestigious (i.e. 'very' choosy), and to assess the journal's audience. Foster (2003:5) states that a good manuscript is created when the author is surrounded by current and concise references, the manuscript is repeatedly revised, the paper is well edited and proof read, the instructions to authors are studied so as to establish the finer submission requirements, the manuscript is read by others for comments, and the paper is accurately submitted. It is important to review recent issues of the journal in order to be in line with its latest requirements.
An interesting study conducted by Fischer (2004) for the Journal of Management Issues (JMI), based on a reviewers' report data summary covering 1989-1991 (N = 68) and 1994-2003 (N = 217), noted the three most frequently cited errors by referees as: significance of contribution (e.g. findings are of little value/interest), methodological rigor and conceptual rigor. Other issues that contribute toward errors, in descending order, based on 'completely inadequate' and 'major problems' ratings combined in this study, were: discussion of results, length/contribution ratio, treatment of relevant literature, contribution (with revision), clarity of objectives, readability, and logical organization. An editor, according to Fischer, functions as a 'gatekeeper' who ascertains the suitability of a paper for publication in a journal, or separates what he calls the 'wheat from chaff', using the following criteria: the paper does not fit the journal's editorial mission, the submission is poorly written, the use of out-of-date literature, inadequate levels of scholarship (no academic rigor, opinion, no validation of viewpoints) and unwieldy writing (e.g. overly complex, poorly organized, etc.).

Method and procedure
This paper uses the author's experience as editor-in-chief of a peer refereed accredited LIS journal, among other experience (e.g. author, reviewer, etc.), as well as 85 randomly selected peer reviewers' reports on manuscripts submitted to the South African Journal of Libraries and Information Science, to analyze and discuss the common errors made by authors on submitted manuscripts for publication and the challenges facing these authors. Three sets of data capturing sheets were created in Excel. The first sheet was populated with quantitative scores generated from the Reviewer's Assessment Form (see Appendix 1), which captured scores on originality, significance and contribution, organization, methodology (where applicable), literature review and language/readability by
allocating Likert scale measures of 5 (excellent), 4 (good), 3 (marginal/fair), 2 (poor) and 1 (very poor/rejected). Scores from the 85 documents were then tabulated and transcribed into charts, as reflected in Figures 2-7. The second sheet was created through a content analysis of the reviewers' textual/qualitative remarks. Key concepts were derived from the remarks and itemized or listed for frequency analysis. The list was sorted alphabetically and the frequencies of concepts recounted and verified in order to determine the most common errors identified by the reviewers, as listed in Table 1. The paper is informed by existing studies and literature on scholarly publications/journals and peer reviews, as well as the author's experiential knowledge in scholarly publishing.
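The frequency analysis described above amounts to tallying concepts coded from the reviewers' remarks and then ranking them. A minimal sketch follows; the concept labels and counts are invented for illustration and are not drawn from the actual 85 reviewer reports:

```python
from collections import Counter

# Hypothetical concepts coded from three reviewers' qualitative remarks;
# the study itself derived such lists from 85 reviewer reports.
coded_remarks = [
    ["methodology", "organization", "readability"],
    ["methodology", "literature review"],
    ["organization", "methodology"],
]

# Tally how often each concept occurs across all reports, then list the
# concepts alphabetically with their frequencies, as in the procedure above.
counts = Counter(c for report in coded_remarks for c in report)
for concept in sorted(counts):
    print(concept, counts[concept])
```

Sorting the tallied list alphabetically before comparing frequencies, as done in the study, makes recounting and verification against the source remarks straightforward.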

Results
This section focuses on information obtained through an analysis of the Reviewer's Assessment Form normally posted to each reviewer with the manuscript for assessment. The first section (see Figures 2-8) provides cumulative scores from the 85 reviews based on six indicators. The second section (see Table 1) reports on the qualitative evaluation reports obtained through the content analysis of the reviewers' reports.

Originality
Originality refers to the quantity and quality (novelty) of the contribution made by the author to the content of the document or manuscript. A five point Likert scale, represented in Figure 2, was used to measure this variable. Notably, a large number of reviewers scored originality Good (45% of 85) and Fair (34% of 85) respectively. The Very Poor and Poor scores combined made up less than 20%, suggesting originality was generally considered fair to good.

Significance and contribution
Significance and contribution are not necessarily the same concept. A manuscript may be significant in terms of how relevant or important the research topic, theme or domain is, but the content may contribute little towards LIS research, suggesting weak or inappropriate research and inadequate presentation rigor. Thus, contribution refers to the impact a manuscript possesses in LIS research. Questions relating to the latter include: are there fresh contributions to LIS knowledge in terms of critical evaluation of related studies or in the informativeness of the review; is the methodology robust and replicable; are the results valid and relevant; is the discussion analytic and evaluative; and are the conclusions and recommendations valid? In this category, the reviewers scored both Poor and Fair, with a few scores in favor of Good, as reflected in Figure 3. In essence, the reviewers' rating of contribution was low. However, regarding significance, while Excellent, Poor and Very Poor received insignificant scores, Fair and Good received over 70% of the scores, with most reviewers rating the significance of the manuscripts Good.

Organization
Organization may also entail formatting, coherence and writing style. Organization (Figure 4) scored Fair (29%) to Good (36%), meaning errors were moderate.

Readability
One challenge of disseminating research lies in writing clearly and correctly, thus enabling readers to access and understand an article's content. Readability, language use and accessibility are used interchangeably to define how well a manuscript has been written. Not all scholarly authors have mastered this quality, as the command of a dominant language of communication varies amongst people and is particularly problematic for those using a non-vernacular language for scholarly communication. As reflected in Figure 6, readability scores are split between negative and positive, suggesting they were generally fair. It was observed by this author that most manuscripts were written by either established authors, or established and novice authors in co-authorship. Authors also try in most cases to get their articles proof read before submitting them for publishing in order to meet the requirements normally demanded by journal editors. This author concurs with Fischer (2004) and Hernon (2003:6) that journal editors normally edit manuscripts before sending them to reviewers as part of their 'gate keeping' role. Authors also seek help from editors in order to improve the readability of their papers before they are submitted to reviewers.
Literature review
A literature review is essential, though not all manuscripts may strongly reflect this aspect. There is, however, unanimous concurrence by the authors cited in this paper (e.g. Gorman 2002, Kaniki 2002, Maddux and Liu 2005, etc.) that a literature review is essential for any research, as it informs the manuscript and affirms that the author is familiar with the research field and able to interrogate and evaluate it correctly and competently in support of his/her work. The level of literature review varies according to the nature of the manuscript, which could be a research paper, case study, conceptual paper, technical paper, literature review, etc., an aspect that perhaps explains why it (see Figure 7) was not necessarily well represented.

Discussions
The first set of scores, represented in Figures 2-8, which focus on originality, significance and contribution, organization, methodology, readability and literature review, show moderate errors in the six categories. For example, originality scored between fair and good. Very few manuscripts scored excellent (11%, meaning error free) or very poor/rejected (5%) for originality. Secondly, 51% of the manuscripts were weak (sharing fair, poor and very poor) in terms of contribution. Surprisingly, contribution scores were considerably less (10; 11.8%) in the qualitative analysis of errors (Table 1) based on reviewers' remarks. Scores emanating from the quantitative analysis do, however, concur with the studies conducted by Smaby, Crews and Downing (1999) and Fischer (2004), who identified contribution as one of the top errors made by scholarly authors. Organization scored 65% when fair (29%) and good (36%) were combined, which reflects positive judgment by reviewers. However, in the detailed qualitative reports (Table 1), organization or presentation scored second highest (61; 71.8%) with regard to common errors experienced by authors. In related studies reported by Smaby, Crews and Downing (1999), organization is rated 2nd from the top of eight errors, while the study by Fischer (2004) rated organization 10th (last) of ten errors. Although significant variations could exist in the weight of organization as an error made by authors, organization features enough in this study to be considered an error worthy of attention.
Methodology is almost proportionately rated good (27%), fair (25%) and poor (29%) in the quantitative scores of reviewers (Fig. 4), meaning that more errors were found in this category. This is reiterated in the scores based on the qualitative reports in Table 1, where methodology is rated first (62; 72.9%) in terms of common errors. Interestingly, most studies reviewed in this paper (e.g. Gorman 2000, Calvert and Gorman 2002, Smaby, Crews and Downing 1999, Fischer 2004) rate methodology as a common cause of frequent errors made by authors, including authors in LIS peer refereed journals. Thus, methodology is an area of serious concern for LIS research.
Readability or language scored fair [43%] (see Figs. 5 and 8), suggesting that it was adequate. However, we wish to concur with Fischer's (2004) and Hernon's (2003) observation that editors edit/screen submitted manuscripts before sending them to reviewers, a factor that directly influences the negative scores. Experience has shown that poorly written manuscripts normally get returned to authors for editing or proof reading before being subjected to a peer review process. Alternatively, such manuscripts are rejected. The reviewers' quantitative reports (Figure 5) concur with the qualitative reports, where readability is placed 3rd with 40 (47%), as indicated in Table 1.
As discussed before, a good literature review is important in (LIS) research. However, depending on the nature of the study, a literature review may not be fully represented in a journal article. For example, while a research report may give only an overview of the literature, a literature review manuscript would give a detailed review of related studies. Therefore, evaluating a manuscript in this category should be done selectively and not by completely ignoring whether or

Figure 7: Literature Review

Table 1: Content analysis and representation of the reviewers' views on authors' errors (N=85)
Methodology: data collection instruments and analysis inadequately presented and described, research method not articulated, no empirical study
Presentation/organization: poor or unnecessary graphic presentations, poor organization, no logical flow, lack of clarity, inappropriate format, inadequate abstract, unclear scope of research, no or inappropriate introduction, poorly structured, length of paper either too long or too short