Journal info (provided by editor)

% accepted last year: n/a
% immediately rejected last year: n/a
Articles published last year: n/a
Manuscripts received last year: n/a
Open access status: n/a
Manuscript handling fee: n/a

Impact factors (provided by editor)

Two-year impact factor: n/a
Five-year impact factor: n/a

Aims and scope

The editor has not yet provided this information.

Latest review

First review round: 9.0 weeks. Overall rating: 0 (very bad). Outcome: Rejected.

Motivation:
Unfortunately, negative ratings are not possible here, but I feel this Journal deserves a minus one. The quality of the peer reviews was by far the worst I have experienced in my career (I started writing journal papers in 2010 and am writing this in 2022). The reviewers' reports were not helpful at all: they contained no criticism of either the methodological aspects of the work or the reported results. The only message conveyed to us was that the text was difficult to understand because it was too scientific. We submitted the manuscript 'as is' to a more appropriate journal on the same day we received the decision letter.

About the reviewers. Reviewer #1 complained about how different the manuscript was from the way they usually write papers: it contained citations "in a lump form", the sentences were not phrased to their liking, and there was no overview of the field in general (why should there be one? this was not a review paper). I find it hilarious that they decided to focus on language, since they were clearly not a native speaker themselves, and the edits they suggested were bogus. Reviewer #1 further admitted they were not able to follow the derivation of the main theoretical arguments, and then suggested that the difference between the samples considered in this work was not 'sufficiently clear'. Although the paper focused on a modification of a data treatment procedure for MEASUREMENTS, the Reviewer thought it would not be suitable for the Journal because it was too 'theoretical'. Well, it just so happens that these measurements involve theory; we are not able to change that, sorry! And these complaints came even though three sets of experimental samples were measured and analysed with the said model, and the experiment was explained in sufficient detail.

Then Reviewer #2 wrote their witty comments, which misinterpreted the whole text and made us look like idiots incapable of understanding simple things. We believe that Reviewer #2 misinformed the Editor (likely a deliberate move to prevent our publication), making him believe that a figure showed a poor match between the cross-verified methods, although anyone with clear vision could see the match was excellent. Based on that conclusion, Reviewer #2 went on to claim that our models were not sound because they were numerical and depended on grid partitioning, and that we had chosen arbitrary parameters to test them. The decision was communicated by an 'Editor' who did not have an academic degree, said to be acting 'on behalf' of the Editor-in-Chief.

I strongly advise anyone reading this AGAINST submitting to this Journal. It is not even in Q1, and the Editors and Reviewers are unprofessional, so don't even bother.