Reactions by historians here indicate that barriers to interdisciplinary work are robust and may even reflect hostile attitudes, although these seem rooted more in different styles of thinking and communicating than in ideological differences.
Ah, and the paper itself takes a fresh look at a small sequence of well-known events by using an established method to analyse spatio-temporal processes. The new points are modest: an estimate of the velocity and topology of the spreading, and some correlation with some stationary factors. Why the excitement?
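(For intuition on what such a velocity estimate amounts to, here is a minimal sketch - the coordinates, dates and constant-speed assumption are illustrative inventions of mine, not taken from the paper: given dated, geolocated events, regress each event's great-circle distance from a presumed origin on its day of first report; the slope is a rough front speed in km/day.)

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def spread_velocity(origin, events):
    """Least-squares slope of distance-from-origin vs. day of first report.
    origin: (lat, lon); events: list of (day, lat, lon). Returns km/day."""
    pts = [(day, haversine_km(origin[0], origin[1], lat, lon))
           for day, lat, lon in events]
    n = len(pts)
    mean_t = sum(t for t, _ in pts) / n
    mean_r = sum(r for _, r in pts) / n
    num = sum((t - mean_t) * (r - mean_r) for t, r in pts)
    den = sum((t - mean_t) ** 2 for t, _ in pts)
    return num / den

# Illustrative data only: four events moving outward at a roughly constant pace.
origin = (47.0, 2.0)
events = [(0, 47.0, 2.0), (1, 47.2, 2.1), (2, 47.4, 2.3), (3, 47.7, 2.4)]
v = spread_velocity(origin, events)  # rough front speed in km/day
```

The real analysis of course has to cope with uncertain dates, multiple possible origins and patchy coverage; the sketch only shows how thin the core quantitative claim is.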
Ironically perhaps, I think it stems from the history of interaction with the data-mining set, where this is seen less as interdisciplinary cooperation than as a colonizing effort attempting to take space from the field. I'm in no position to comment on this paper, but the last 25-ish years have led this way.
In many fields, publications in N are really used to occupy territory and fight for funds. But in this case, direct competition and the fear of being overpowered by the people with maths & computers & money seem remote dangers. So something else may still play a role.
I had hoped someone would come and explain the epistemic outlook of the field as basically incompatible with the crude network model of events and links, whose brutal simplifications could hardly deliver anything noteworthy.
Instead, what mostly came was: quantitative historical science exists, but it cannot be done like that, there are better data, and without the full context how dare these people analyse historical data. Fair enough, unless the critique already fails at the simple apprehension of what is in the paper itself.
It's useful to note the reactions were more to the secondhand reporting of the paper, where the framing of 'solves a problem historians couldn't' comes into play; the irritation is as much that this ignores that the data was collected by historians & the tested hypothesis was theirs as well.
The reaction is a more generalized one to the state of higher education, funding, attitudes towards history & the humanities, overstepping of boundaries & inadequate reporting than it is about this paper. There are some real & serious concerns there which color all cross-discipline interactions.
Yeah, that sounds reasonable. If one sees all this horrible reporting on science in popular media, where everything is mysteries, unsolved riddles, gripping sensations, framing science as spectacle, in that atmosphere it is hard to justify research whose main activity is careful reading & writing.
It's not interdisciplinary if it doesn't engage the other discipline. Studying the past doesn't equal engaging the field of history, which has ongoing discussions about method including quantitative methods.
As a very interdisciplinary scientist myself, I am very cautious about people defending turf. But then I read how "…The study focuses on 'something that we can measure', he notes. 'So, it's science. It's not only speculation…'" and that is madness. Arrogance, ignorance and condescension all in one.
It is really interesting, as I perceive such statements completely differently. He explains how focusing on "something that we can measure" offers possibilities for quantitative modelling, and that this new application of such well-known methods offers some new answers. Seems fair enough.
One would need to harbour an almost bizarre veneration or fear of 'science' - meaning here some rather crude quantitative approaches from statistical methods - to construe such statements as arrogance etc.
I am not a historian, I am STEM by training, and it definitely sounded like arrogance. "Unlike your field, which is speculation, I bring science!"
Very messy data collection… while acting as if this was not studied…
The data collection used in the paper is a classic. So one has to carry the critique to G. Lefebvre, who did it.
the ai tooling ? or the human checking ?
Well, that is not even auxiliary science - only a clerical job for some grad student. If done well enough, the electronic version should reproduce the collection of G.L. almost 1:1.
So now we both agree that the person claiming this is an unstudied topic is why they are getting lots of pushback, no? This isn't some new, novel thing, and they didn't find anything new, hence people being upset.
Agree? I think it is rather impossible to cite whole monographs about a topic, create a database from one for some analysis, and then claim it was an unstudied topic. One can believe whatever; to each his own.
Umm so now you are on the side of the people who are calling the paper out
The issue is not calling the method used science. The issue is the "…not only speculation…", which is as good as saying "what other people were doing has no epistemic value". That wasn't the intended meaning? Well, if you don't want to say that, don't say it.
I do not think the statement was meant as a disqualification. When working with data or documents, which are always insufficient, the greatest danger is fooling oneself about the merit of an idea and how to interpret them. Quantitative evaluation then mostly helps one convince oneself and move past speculation.
From the refereeing process, it seems the 1st submission was even simpler: an attempt to employ the metaphor of rumours spreading like contagion for a historical episode. Only in revision did the authors use different abstract models of epidemiology and favour one.
The stuff about SIR, emotional contagion etc.: these metaphors may give very misleading ideas about the social interactions & dynamics at play. It is rather here that some major hermeneutic effort would be interesting, to understand what the selection of models implies.
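(To show how little is actually in the contagion metaphor, here is a minimal SIR sketch - the rates beta and gamma are made-up illustrative values, not the paper's: "susceptible" hearers become "infected" spreaders and then "recover" into silence. All the substantive social dynamics are compressed into two rate constants, which is exactly where the hermeneutic questions would bite.)

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One forward-Euler step of the SIR equations:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    s, i, r are population fractions; beta is the transmission rate,
    gamma the rate at which spreaders go quiet."""
    new_i = beta * s * i * dt   # susceptibles who pick up the rumour
    new_r = gamma * i * dt      # spreaders who stop spreading
    return s - new_i, i + new_i - new_r, r + new_r

# Illustrative run: rumour starts with 1% of the population spreading it.
s, i, r = 0.99, 0.01, 0.0
peak = i
for _ in range(2000):           # 200 time units at dt = 0.1
    s, i, r = sir_step(s, i, r, beta=0.5, gamma=0.2)
    peak = max(peak, i)
# With beta/gamma = 2.5, the outbreak peaks and then burns out,
# leaving some fraction that never 'hears' the rumour.
```

The model fits any spreading process with these two knobs, which is why fitting it well tells you rather little about *why* people passed the rumour on.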
From the tentative formulation by the authors that there was likely more political rationality in the actions during the Great Fear, based on their modelling effort, I tend to believe they would welcome other approaches to a description of 1789.
In any case, in the warning against speculation, I rather hear the practitioner who has worked with deficient data and documentation and knows how dangerous preconceived ideas are, while having no ideas or speculation at all doesn't work either.
I'm an economist, not an epidemiologist, but if economists said that, it would 99% surely be an unsubtle critique/dismissal of previous research - especially if it was aimed at a 'soft' discipline (intentional scare quotes; I've taken some grad history and anthro, and that pejorative framing is usually wrong).
This comes back to style of communication. In the so-called 'hard' sciences the idea of progress is pervasive. Doing work on some old problem or on an established topic, any new contribution will imply a critique of previous work. Introductions of a new study will often contain formulaic statements.
Like: there is this idea, conjecture, opinion, but so far data, evidence and models have been insufficient, methods inadequate, assumptions wrong, proofs had gaps… Followed by: here, we have addressed this by new experiments, calculations, a changed perspective in our theory, better models etc.
The model for this type of account of the state of the art possibly comes from maths - conjectures, then proofs by some effort, maybe partial at first, then generalization etc. In most scientific work, real proofs do not exist, but if people are closer to maths, such statements will sound stronger.
Previous work may even simply be described as wrong, and if the authors believe their work is a significant new contribution, they will write that it settled a question, ornamented with self-praise about how they achieved that progress.
What can I say? It read very clearly to me as dismissive of other approaches. Maybe it wasn't meant that way, indeed. But the likes on my post suggest my reading is a common one.
...because everyone knows that numbers, no matter how they were acquired, trump every other form of evidence.
I think there’s a logical issue here. While you cannot scientifically investigate anything without measurement, that does not entail the conclusion that anything that you measure must be scientifically grounded. “How do I love thee? Let me count the ways…”
Scientific inquiry without measurements exists - think of all the sciences dealing with the past, like palaeo-something, choose your favourite - and measurements, on the other hand, exist outside of the sciences too. What has this to do with an exercise in data analysis of some documented events?
I'm curious which "palaeo-something" doesn't include measurement?
Historically, work based on morphology was not using proper quantification, but observation only. By now, admittedly, this has also changed with the possibilities of image processing and quantification.
Actually morphology has long included taking measurements using equipment like calipers.
The logical entailment part…
The logical entailment from measurement here attempts to gain a consistent picture about specific events in space & time in 1789, rumours & outbreak of violence. The science is in developing a method to back this with existing data & quantification. As such it appears as valid research to me.
You absolutely can "scientifically investigate" things without measurement! A lot of History and Botany, to pick two examples. Description is part of science, and it precedes measurement.
Fair. My point was that measurement does not imply science.
I regret to inform you that historians have been using data analysis to gain insights into the past for so long that in the 90s the technique was already being discussed in historiography classes. Historiography is like meta-history: what can we learn about a period from their approach to history?
So? As far as I can make out, the paper just applies a specific form of network analysis to a set of events which are amenable to it - rumour-spreading - and even that is not new. It doesn't say anything about data analysis, absent or not, in the historical disciplines.
A valid and interesting critique might start with some statement or idea of how the tool and the model used cannot deliver any worthwhile insights on the events of 1789, because <...>. And there is a remarkable gap here. And this gap is not an issue of lacking or suppressed data.
They could have involved a historian who specialises in the period, who could have found them better sources and more recent scholarship. They didn't value history enough to do that, which is telling. And you think the only issue we could have is with the model they used 🙄
That is the view of historians, but for a first attempt at using a data analysis tool (or a new computational scheme etc.), most scientists will prefer some classical, well-established example and not try to be up to date. If the application delivers something interesting, all the better.
Admittedly, this is a certain laziness, but up-to-date data or descriptions/records are harder to come by, require more work, may be more contested, will be available only after convincing someone from other fields etc. Too much for illustrating a certain idea about a tool new in another field.
No they're not if they take one (1) moment to ask one (1) subject matter expert
And there's that contempt all over again. There is so much wrong here that I am not sure I can be bothered to break it down.
They did a stats project and want people to say it's groundbreaking. It's just not. It could have been mildly interesting, but the contempt for the subject on display devalues it from even that level.
At least they did not say it is earth-shattering. This now seems to be the yardstick in the N journal. Policy in scientific publishing really is another subject - and it appears misguided to criticize authors for a measure of propaganda when they offer interesting stuff, mildly or otherwise, there.
Stop moving the goalposts and listen to the people taking the time to talk to you
It's impossible to study, e.g., the English Civil War without it. Some of it is about sources and framing, and which voices get ignored/prioritized; some of it is about techniques. Historians know they have blind spots, because part of the discipline is to study the blind spots of the past.
That article is basically the history version of claiming to have disrupted the taxi industry by inventing the bus
Domesday Book ☹️ Domesday Book IN AN EXCEL FILE 😃
It's absolutely bad science to simply decide that because a subject is hard to measure you will isolate something that can be measured and then just assume that accurately reflects a real thing.
This is not what the authors say and do - the isolation of something measurable actually is a vital part of research. After the exercise, if successful, one has measured the thing. E.g., here the velocity of the spreading of relatively well-circumscribed single but interrelated events, not more.
A more panoramic view of the year 1789 then should comply at least with this piece of information. The use of models and quantification seems more an attempt to avoid fooling oneself about issues that one expects in the data, but cannot grasp by a merely descriptive strategy.
That's not even what was measured. What was measured was the preservation of specific info. That's the point. They are assuming that the thing they can measure reflects a specific reality that any historian could tell you it doesn't. This is literal "dark ages means no science because no sources"
level extrapolation. You cannot derive something the data doesn't exist for just because you can find some data and assume it reflects a thing it doesn't.
Isolation of something measurable is not a vital part of research! To be a bit more precise: measurement is certainly desirable where it can be achieved, but it is neither necessary nor sufficient for research
Well, it is one way to gain some idea about past events, or regularities in phenomena. There are others, for sure - but as an approach as part of research, why not ?
Absolutely, do it, it's good! New methods can bring better understanding; I'm all in favour. The issue is (1) pairing measurement with science and then (2) contrasting that approach with "just speculation". That's where the strong tone of arrogance and condescension comes in.
Where they say the data & modelling was not sufficient to fix the topology (or spatial spread) of separate clusters, which may have originated independently, but may have also experienced an earlier wave of contagion, they refer to a historical monograph providing similar conclusions, just as such.
For the record, I do not know any of the authors (I have read some papers by Zapperi, and he may be cited somewhere in my stuff). As they concern themselves with a historical phenomenon and use data collected by a historian, I would think it unlikely that their drive is disdain for historiography.
Have you assumed I was calling the authors arrogant? Nowhere have I said that. On the contrary, I assume the authors have a good attitude. I responded specifically to the commentary in the story you linked to. I quoted directly the comments I find objectionable.
Exactly.