Journal info (provided by editor)

The editor of Scientometrics has not yet provided information for this page.

Issues per year: n/a
Articles published last year: n/a
Manuscripts received last year: n/a
% accepted last year: n/a
% immediately rejected last year: n/a
Open access status: n/a
Manuscript handling fee: n/a
Kind of complaint procedure: n/a
Two-year impact factor: n/a
Five-year impact factor: n/a

Aims and scope

The editor has not yet provided this information.

SciRev ratings (provided by authors) (based on 14 reviews)

Duration of manuscript handling phases
Duration first review round: 2.1 months
Total handling time accepted manuscripts: 3.3 months
Decision time immediate rejection: 27 days
Characteristics of peer review process
Average number of review reports: 1.6
Average number of review rounds: 1.7
Quality of review reports: 3.5
Difficulty of reviewer comments: 2.9
Overall rating manuscript handling: 4.1 (range 0-5)

Latest review

Outcome: Rejected (immediately).

Motivation:
To justify the manuscript's rejection (without external review), the only comment was the following: "Comment by the editor: We have again decided against publication in our journal for the following reasons. Firstly, the theoretical embedding of the model into the scientometric context seems to be problematic. The same questions regarding quantification and measurement of the notion of quality (both peer review and journal) can be quantified and measured. Again, we do not see any attempt of giving empirical examples to show concrete implications for quantitative science studies. Finally, the grandiosely introduced free online tool proved a formula for just calculating sensitivity/specificity values based on partially obscure parameters without any other features."

We sent an email to the EiC of Scientometrics (without response). In it, we argued that such an editorial comment shows a content-based bias against our research. The purpose of peer review is not to "kill the paper" but to evaluate it in terms of its strengths and limitations; the editor's comment, however, reflects partiality against our submission on the basis of its content (i.e., its methods and theoretical orientation). This is the confirmation bias that undermines the impartiality of peer review: an editor should evaluate a submission on the basis of its content and its relationship to the literature, independently of his or her own theoretical and methodological preferences and commitments.

Furthermore, this is a disciplinary editor who prefers mainstream research and is biased against our interdisciplinary work, which used Bayesian inference and information theory to define the optimal sensitivity and specificity of the peer-review process. He or she also treated us, the authors, with contempt and little consideration; note the repeated use of "again" and the incoherence of some of the sentences. Especially disrespectful, and showing a complete lack of rigor and knowledge, is the remark that "the grandiosely introduced free online tool proved a formula for just calculating sensitivity/specificity values based on partially obscure parameters without any other features."

It is also telling that this seven-line comment, issued without external review, took a total of two full months. In my opinion, it is another clear attempt to hinder our work. Mathematics and computer science are by definition quantitative fields, and we have published more than 30 papers in this same journal using a similar theoretical orientation. In the last year, however, this same theoretical and mathematical approach has apparently ceased to be acceptable to the one editor repeatedly assigned to our submissions. He or she is clearly opposed to our approach and our work, and does not send our submissions out for external review, while his or her comments show that he or she is not an expert in the area: in our opinion, they lack the minimum rigor necessary to evaluate the work of scientists, all the more so after we have published around thirty papers in Scientometrics.
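For readers unfamiliar with the terms at issue, the review above frames peer review as a binary classifier whose sensitivity and specificity can be computed. The sketch below shows only the standard textbook definitions of those two quantities with hypothetical counts; it is not the authors' actual model or the formula used by their online tool, neither of which is given here.

```python
# Illustrative sketch only: standard definitions of sensitivity and
# specificity, with peer review viewed as a binary classifier
# (accept/reject versus a manuscript's true quality). All counts
# below are hypothetical.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of good manuscripts the process accepts: TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of poor manuscripts the process rejects: TN / (TN + FP)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical example: of 100 good papers, 80 are accepted and 20
# rejected; of 100 poor papers, 90 are rejected and 10 accepted.
print(sensitivity(80, 20))  # 0.8
print(specificity(90, 10))  # 0.9
```

A process with high sensitivity but low specificity accepts too many weak papers; the converse rejects too many strong ones, which is the trade-off the reviewed manuscript reportedly set out to optimize.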