Incarcerating Conrad Black Would Vastly Exceed His Crime

[In hindsight, from the age of Trump, a decade later, it’s clear that the likes of Black and Trump deserve everything that’s coming to them. Trump’s villainy and vacuity are obvious, but Black’s more genteel and glib criminality is just as foul.]

Review of: “Robber Baron: Lord Black of Crossharbour” by George Tombs (ECW Press, 2007)

First, it cannot escape the reader’s attention that in wooing his confidence to elicit information for his book, the author, George Tombs (in a far milder way) used wiles not unreminiscent of the ones that Conrad Black used to enhance his fortune — but it is Conrad Black who now faces prison.

Fair enough. Black did it on an incomparably grander scale. So did just about everyone else in Tombs’s book, as far as I can tell — Black’s partners and competitors, his supporters and detractors, his defense lawyers and the prosecution, to greater and lesser degrees. Humans are a deceitful and manipulative lot, and when that degree passes a certain threshold, they need to be restrained or punished.

And Black undeniably passed that threshold. The only question is whether the proposed punishment is commensurate with his crime, and to this reader it is absolutely clear that it is not.

Conrad Black is a hero-worshipper (and his heroes — numbering Duplessis, Nixon and Napoleon alongside Roosevelt and Churchill — are not all admirable); he is a money-maker (skillful, lucky, but not especially creative, if one can be said to be creative at all in the direct quest for money, rather than the quest of something else, with money only a byproduct); he is an ostentatious spendthrift on a grand scale (not a noble or admirable trait, when so many are so poor, but not in itself a crime); he is considerably more intelligent, learned and cultivated than average (and what intelligence he does not dedicate to money-making, he devotes to reading, writing and publishing on history and politics, likewise on a scale far above the average); his neo-conservative political views are not noble or admirable either (except to other neo-conservatives and money-makers), and, as always in such cases, they depend on a considerable degree of self-deception, alongside the usual quota of deceit and manipulation.

Conrad Black’s business ventures (consisting mostly of buying up newspapers and making them profitable by firing staff and tightening their efficiency) increased their revenues by billions, and he appropriated hundreds of millions for himself in the bargain, through both legitimate and inflated fees, and through inflated payments from “non-compete” agreements. His crime was pocketing those extra payments rather than passing most of them on to the shareholders of the private company that he had subsequently made into a public one (though I’ll wager that he generated an order of magnitude more revenue for others — “wealth creation” — than the inflated fraction of it that he withheld illicitly for himself). He also wrote three thoughtful biographies (Duplessis, Roosevelt, Nixon), an autobiography, and a large number of newspaper articles, all in much the same neo-conservative vein. He consorted (and loved to consort) with the rich, titled, famous, and influential.

That’s about the size of it. And the question is whether, for that, he deserves to spend years in a medium-security prison, alongside murderers, violent drug-dealers and mafiosi, by way of punishment.

If we set aside our (justified) resentment at the unrestrained and remorseless pursuit of wealth and power (and Black’s ideological celebration of that very pursuit), our glee at seeing a high-roller caught in the act of pilfering, our distaste (not untinged with envy) at the self-assured arrogance with which Black operated until his fall, and our (self-righteous) applause when corporate criminality is caught, exposed and punished — can we really say that Black deserves anything worse for his crimes than to lose his fortune and prestige and to spend the rest of his days repaying his debts?

This urbane, knowledgeable, eloquent (if long-winded and hyperbolical) man was indeed caught in corporate crime that is on no account to be pardoned or permitted. But he was not violent, not psychopathic, and not (this must be stressed) purely venal either, being also an impassioned and resourceful advocate of a political position that I personally find revolting, but worthy of discussion and analysis, if only so that its iniquities can be exposed and rebutted.

I do not think anyone’s interests are served, nor any useful example is made, by incarcerating such a man with violent criminals or even common Enron rogues. His assets should be seized to pay back his debts, but he should be allowed to write his memoirs in peace, exiled to his economic Elba, not the US penal system.

To punish Black any more than that would be to show exactly the same lack of understanding and empathy for those less fortunate than ourselves that Black himself showed in his single-minded pursuit of fame, power and fortune, and his benighted championship of that pursuit as the meaning of life.

There is no example to be set here, to warn off similar corporate malfeasance by others; there will never be another like Conrad Black, not even close.

(Tombs’s book is fairly well-written, but quite repetitious, not always well-integrated, and with a number of undetected typos.)

The Age of Opinocracy and Hypocracy

Professor Colquhoun’s Guardian article on our age of “endarkenment” is perhaps a bit exaggerated, but it’s basically right. Educational and research institutions are being turned into corporate bottom-line-feeders and democracy itself is morphing into opinocracy and hypocracy (shaped by hearsay and well-funded media manipulators). Promoting “metrics” in research risks feeding into this, and that is troubling. Metrics are, of course, fallible, misinterpretable, and manipulable. For me, they are not an end in themselves, but a possible means of making research freely accessible to all. Right now the media are dominated by trash and quackery. Injecting all of real research into that open database might — just might — mitigate that a bit. Openness might even help constrain metrics (monitoring, exposing and shaming abuses). I doubt it can make things worse. (The trouble is that, although openness may be asymptotically self-corrective in the long run, technology, including the media, is beginning to make it possible to do such monstrous acute mischief in the short run that the long run might come too late. Let’s hope this is merely a hysterical hypothetical…)

Grounding the Arbitrary in the Non-Arbitrary

STEVAN HARNAD

Commentary on:
The poet who could smell vowels
Times Literary Supplement November 14 2007
(article on Ferdinand de Saussure by John E Joseph)

Please don’t be frightened off by the symbols; they are made fearsome for a purpose: Suppose we have a hundred things. These can all be physical objects, or words, or speech sounds. Now suppose we sort them into (say) two categories, A and non-A, based on three two-valued (+/-) properties, X, Y, Z. The properties could be natural ones (+/-solid, +/-edible, +/-pronounceable) or social (+/-kosher, +/-English, +/-posh). Each thing can be described by its value on the three properties (e.g., +-+). Let’s say that to be in category A you need to have a + on property X; otherwise you are in category non-A.

Now I hope that this exercise has left you a little lost in a bunch of meaningless formal symbols. So even if you followed well enough to be able to tell me that a thing that was -++ would be a non-A, you would still have little idea of what the things, or the properties or the categories were. This would be true even if you spent years categorizing examples I fed you, in the form of “Would a -+- be an A or a non-A?”
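The point can be made mechanical, which is precisely the point: a rule-follower can answer every such query flawlessly while “knowing” nothing at all about what the symbols stand for. Here is a minimal Python sketch of the exercise above (the function name and representation are illustrative, not from the text):

```python
# The exercise above, made mechanical: classify a thing by its property
# vector (a string like "-+-", giving the +/- values of X, Y, Z in order)
# using the rule "all and only A's are +X".

def categorize(properties: str) -> str:
    """Return 'A' if property X (the first symbol) is '+', else 'non-A'."""
    return "A" if properties[0] == "+" else "non-A"

# The rule-follower answers every query correctly...
print(categorize("+-+"))  # → A
print(categorize("-++"))  # → non-A
print(categorize("-+-"))  # → non-A

# ...yet nothing in the program "knows" whether X means edible, kosher,
# or posh, or what the things themselves are: the symbols are ungrounded.
```

The program is a correct and exhaustive solver for the categorization task, yet it could run till doomsday without ever being able to recognize an A, an X, or a + in the world: exactly the predicament of the symbol-manipulator described in the next paragraph.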

This is an example of the arbitrariness of symbols (which is what A/non-A, X/Y/Z and +/- are here). Words too are symbols. Whether a mushroom is edible or not is not a symbol, but its name “A” and the names of its properties are. Saussure is best known for stressing the arbitrariness of symbols, but apparently that was already well known from Scottish sources before his time. Saussure also had synesthesia, which means, for example, that for him vowels had a smell, and this helped him see (or feel or taste) associations between words and objects that most of us do not see. He perhaps thought that such associations somehow provided a bridge between the arbitrary shape of symbols and the natural shape of the things that symbols signify.

But Saussure’s main contribution, which he derived from his English lineage (via Mill from Hamilton) was the view that (what we would today call) cognition is “differential”: it is somehow based upon encoding differences in terms of the kinds of +/- properties illustrated above. This led to structuralism. We don’t see things as absolutes. We see them in terms of a network of formal contrasts. An A is an A because it is +X. The “representation” of a thing then becomes the set of +/- values on its properties.

This is all fine as far as it goes, but there is a problem: Although my behaviour could very well be described as categorizing when I used the rule “All and only A’s are +X” to reply to questions like “Would a -+- be an A or a non-A?”, I could do that task till doomsday without ever knowing what an A or an X was, and with no way to recognize one if I saw it. This is called the “symbol grounding problem.” Today, cognitive science tends more toward computationalism than structuralism, but both approaches are insufficient to explain cognition, and for much the same reason: arbitrary symbols (whether part of a structural diagram, a computational algorithm, or, for that matter, an English sentence) are merely, as the philosopher John Searle calls them, “squiggles and squoggles.” Their connections with the things they signify are parasitic on the meanings in our heads, and what we have in our heads is definitely not just more squiggles and squoggles.

To ground symbols, to put concrete flesh on their arbitrary bones, be they ever so systematically structured, the symbol system first has to have the direct sensorimotor capacity to categorize the physical objects that its symbols signify — not merely after something has magically reduced them to a symbolic description. And the “shape” of sensorimotor capacity (like the shape of objects themselves) is not symbolic or arbitrary: it is analog and dynamic. This is not synesthesis, but esthesis, and it requires a mechanism for learning and identifying categories that a symbol system alone will always lack.

Expertise and “Elitism”

Anonymous correspondent: “Peer review is elitist and oligarchic; adding a web-based post-hoc system would be democratic. It is often non-specialists, or Pro-Ams, who expose quackery.”

Aristotle, by the way, taught a monarch, studied under Plato (who advocated oligarchies), and then lived in the Athenian democracy. He concluded that the democratic method was the most effective.

See Aristotle on Smart Mobs!

On peer review: I agree completely that adding a web-based post-hoc system would not only be democratic but a dramatic, powerful new safeguard on validity. Don’t forget that post-hoc commentary is and has been my bandwagon all along: it’s what motivated “scholarly skywriting” and drew me into Open Access!

But the critical point is that it is post-hoc. It is not a competitor to peer review but a complement (“not a substitute but a supplement”). It is when people propose post-hoc commentary as a substitute for (rather than just a supplement to) the advance correction and filtration by answerable, qualified experts provided by peer review that I (appear to) go into opposition to the very thing I am fighting for — post-hoc skywriting — (but that is a misunderstanding).

Nothing is lost, and everything is gained, in putting a global, open commentary system at the tail end of expert-vetted work. But when it comes to medical treatment for my loved ones, I don’t want their medicine to be administered on the basis of a net-based straw poll or free-for-all alone. You see how rumour and ignorance and superficiality also propagate on the Web. A prior phase of closed, answerable vetting by qualified specialists is essential; otherwise we may as well treat patients on the basis of the latest in Wikipedia. (And science and scholarship are surely not that much less important than health!) Entrusting all that to populist polling and vigilantism is a form of Gaussian roulette.

On expertise: To put it another way: I really think we need to re-think, or think through, exactly what we mean by “elitist” and “oligarchic” in this sense: Is it elitist to have certified cardiologists decide what should be published as a safe and effective operation to perform, rather than having it voted on by a Gallup Poll or swayed by persuasive blogsters? Is it oligarchical (to put it even more luridly) to keep hobbyists out of the operating theatre?

On specialised division of labour: I pick these melodramatic examples only to bring out the fact that there really is something at stake, and that it’s commonsensical: We cannot, in the modern world (of the past tens of thousands of years of civilization!) each be self-sufficient jacks of all trades. We rely on division of labor, and division of expertise, for everything from our food and shelter to our health and security. Our cumulative, collective knowledge and expertise (our “Creative Commons”) is also dependent on this distributed, complementary expertise.

So I ask: is that division into complementary expertise “elitist”? Is reliance on it “oligarchical”? Should we (like our failing education system) declare everyone equally expert as a matter of birthright, and cede judgment to democratic opinion polls in all matters instead?

On re-thinking “elitism”: Without for a moment denying that qualified expert judgment is fallible too (but recognizing also that mass inexpert judgment is no remedy for that, just a useful check/balance), I really think the rhetorical buzzword “elitism” needs a serious rethink — especially where it is in fact referring to specialized expertise, skill, knowledge that a minority have, and have worked hard to attain, whereas the majority have not…

In a sense, representative democracy involves something like this division of labour too: It is only that the tyranny of the daily opinion poll is sadly constraining the work of our elected representatives. Democracy, too, used to be “post hoc”: We would vote for those we considered to be provisionally best qualified to represent our interests (alas not always noble interests, but that’s another matter) and then let them do their job, until it’s time to vote again on whether they deserve re-election.

On mass micro-management: But now, with pervasive media and instantaneous polling, we hardly let them exercise any expertise they may have, or may have acquired on the job (for we hardly vote for experts either, preferring ignoramuses like ourselves, as more flattering and congenial!): we look over their shoulders daily, not only at their sex lives and their expense accounts, but at their daily professional judgments. We insist on a day-to-day participatory democracy, and we get what we insist on: a fashion show of trends and opinions: capital punishment ok today, not ok tomorrow, abortion yes, no, veils in schools, on, off, etc. etc.

Since many of these are matters of opinion anyway, perhaps it doesn’t matter if it is the winds of fashion or rumor rather than the wisdom of trusted electees that decides. But sometimes it does matter. And the catastrophic “popular” (at the time!) Iraq war is one such result. The rest now seems to be the putting out of daily forest fires (still on the basis of day-to-day public opinion).

We not only blindly trust “populism” (and condemn “elitism”), thereby ceding judgment to the vagaries of the “normal distribution” (“bell curve”), which at best guarantees regression toward the mean; at worst, it is the occasional but inevitable burst of noise, or worse, from the tail end [the extreme] of the distribution that sometimes manages to appeal to the mean, and become mainstream.

On more sinister background forces we also trust: Today we are also blindly trusting another unquestioned “force,” rather akin to the ponderous inertial mass of populism, and that is the inexorable march of capitalism: global military-industrial interests being relentlessly — indeed psychopathically, as the excellent movie The Corporation showed — pursued in the background (the Cheneys behind the Bushes).

It is as unfashionable today to be anti-capitalist (in anything) as it is to be anti-populist (in anything). It is axiomatic that what is good for the market is good, and good for everybody, and what is judged good by the majority is good, and good for everybody.

On the wisdom of time: I beg to differ; and to be allowed the time to show that what appears momentarily to be right either to prevailing public opinion or to the corporate bottom-line, may not be right at all. It’s all a matter of time, after all. What allegedly sets our species apart is our capability of deferring gratification and deferring judgment, even deferring to the judgment of those who may be better qualified to judge. Yes, let’s have post-hoc controls on all that, but let them be “post” enough to give experts the chance to do what they are best qualified to do…

On Aristotle on collective wisdom: As to Aristotle on the latent collective (“whole is better than the sum of its parts”) wisdom, morality and ethics of mobs — that might be true of the audience of Hellenic Theatre, and it might occasionally coalesce in the conscientiousness of juries (though one thinks of OJ Simpson), but it hardly seems true when it comes to lynch mobs, Danish cartoon hysteria, or American voting patterns! Nor, for that matter, of the mean value onto which pop music has regressed, with the extinction of connoisseur elites (there, you can pillory me on that one!).

Aristotle may have felt at ease facing a crowd, and deferring to its judgment. I am terrified; utterly terrified. Only demagogues can “reason” with crowds, particularly online, in real time.

I cannot share the feeling of many that the accolade “most popular” is synonymous with (or even remotely related to) “best quality.” My own default reaction (sometimes wrong, I admit, but born of experience, not native conviction!) is the exact opposite: Most popular? Then chances are it is rather superficial, uninformed, and trashy…

D.O. and the D.R.B.

It is fashionable today for those who need not worry about protecting anyone from anything (except fair, careful reflection) to moralize and sensationalize, idly and mischievously, about torture. Is it ever justified? Would our side ever do it?

Fair enough. The price of liberty is eternal vigilance, and torture is no idle matter. But consider where this eager Schadenfreude can lead, if given its head in hysterical times, when rumor and innuendo carry far, far more weight than sober analysis and answerability. Just as the terrorist need only succeed once, whereas his intended victims remain eternally vulnerable, so smears need only besmirch once, and thenceforward all the burden is ever on the victim to try to free himself from the foul spot; all the better if the victim is already deceased and interred.

D.O. Hebb was the greatest research psychologist of the 20th century. (I say “research” to distinguish him from the armchair/couch kind of psychologist with which his work had about as much affinity as with a geologist’s or a gardener’s.) Hebb’s contributions spanned the full spectrum of human (and animal) experience, from behavior to brain function, from childhood to old age, from biological nature to cultural environment, from sensory deprivation to sensory enrichment — and the overarching theme of his life’s work was how experience affects the brain.

Now we are told that Hebb’s secret research taught the CIA how to torture at Abu Ghraib. First, there was nothing secret about Hebb’s research. Generations of undergraduates have learned how his experiments discovered the disastrous effects of sensory deprivation (as well as the remarkable benefits of sensory enrichment). He had been investigating those factors long before the Canadian Defense Research Board (DRB) funded a portion of his research, and it is undoubtedly the case that they funded his research because of its possible interest to the DRB rather than that he did the research because the DRB was interested in it. That is transparent, because the implications of Hebb’s research for the DRB are a one-liner — sensory deprivation has disastrous effects, hence it’s a good potential form of torture — whereas their implications for Hebb’s life work on how experience affects the brain (positively and negatively) constitute the foundations of modern cognitive neuroscience.

That sensory deprivation is a good potential form of torture was what drew the DRB to Hebb’s work in the first place, and they did not learn anything from funding it that they would not have learned if someone else had funded it, and they had merely read it when it was published in a journal (as the DRB tried, unsuccessfully, to prevent him from doing). We must not forget that the military, with its deep pockets, has funded an awful lot of research, a lot of it awfully trivial, and some (like the psychic research they funded to get people to divert nuclear missiles by telepathy), frankly absurd. Researchers, with far shallower pockets, must alas take their research funding where they get it — but that does not mean taking their research where their funder wants it to go.

D.O. Hebb was a great scientist, with a grand vision, who left a lasting legacy in our understanding of how behavior is organized in our brains. The DRB was and is thinking at about the scale and depth of the journalists who now see, in these banal and empty facts about some of the sources of his research funding, the germs of a sinister conspiratorial theory of how Hebb’s work lies behind the abuses pictured in those lurid hooded photos we’ve been seeing in the papers. Perhaps we should look more closely at the funding history of Faraday too, to see whether we can attribute some of the other abominations at Guantanamo to the father of electricity.

Entitlement

In infants, the sense of entitlement is no doubt a healthy instinct. The illusion that our parents exist only to minister to our needs and wants is adaptive; it makes our childhoods feel secure. But we are best weaned of it, sooner or later, because, if it is allowed to generalize to the sense that the world’s raison d’être is our welfare, it becomes self-contradictory, an evolutionarily unstable strategy, breeding generations that expect only to take, with no one left with the inclination to give — except perhaps to their own children.

Your child is entitled to protection from such a rude awakening too.

On Janet Malcolm on Shipley & Schwalbe on Email in the New York Review: The Power of Skywriting

On: Janet Malcolm, “Pandora’s Click,” a review of The Essential Guide to Email for Office and Home by David Shipley and Will Schwalbe

The Power of Skywriting

What makes email into a potential nuclear weapon (and, like nuclear power, usable for either melioration or mischief) is its “skywriting” potential: the fact that multiple copies can easily, and almost instantly, proliferate, intentionally or unintentionally, to targets, intended and unintended, all over the planet. Paper letter-writing (indeed all writing) already had much the same possibility for haste, thoughtlessness, solecism and misinterpretation, and it too was deprived of the emotional, interpersonal cues of the oral tradition of real-time, “live,” interactive speech. But it was when writing took to the skies with email and the web that it came into its own. Hearsay, even when augmented by video and telecommunications, never quite attained the destructive (and constructive) power of skywriting. It’s all a matter of timing, scope and scale. Verba volant, scripta manent.

Harnad, S. (2003) Back to the Oral Tradition Through Skywriting at the Speed of Thought. Interdisciplines. In: Salaün, Jean-Michel & Vandendorpe, Christian (eds.). Le défi de la publication sur le web: hyperlectures, cybertextes et méta-éditions. Presses de l’enssib.

Stigmata: Real and Virtual

A late-comer’s appreciation of John Huston’s 1952 Moulin Rouge, based on Pierre La Mure’s novel about Henri de Toulouse-Lautrec:

Although the French Wikipedia states that HT-L’s contemporaries said he was not bitter or inhibited because of his hereditary deformity (dwarfism and disfigurement, exacerbated by a childhood accident; his parents were first cousins), but rather an ebullient bon vivant, and even something of an exhibitionist, the novel and movie portray him as deeply wounded and stigmatized by his condition, hypersensitive about it, yet prone to make cruelly ironic, self-deprecating allusions to it in his communication and interactions with others.

The idea is that HT-L, who would naturally have been a horseman, athlete, dancer and ladies’ man, instead withdraws into painting and a perceptive but passive observation of life, certain that he is repulsive, especially to women, as a man (and the film has ample actual confirmations of this conviction, with people finding him repulsive and saying so).

HT-L falls in love with a prostitute who had sought his help, and he dares to get into a physical relationship with her only because he perceives that in her profession there is indifference to his condition. But she is indeed a prostitute, and it is never quite clear whether she is really just as repulsed by him as anyone else, or perhaps less so because she too feels a stigma. At any rate, she, ex officio, “betrays” him and his only carnal relationship (according to the movie — in real life HT-L had many prostitutes and mistresses too) ends, leaving him overwhelmed by despair and drink. But again his art, and his sardonic view of life draw him back from despair, if not from drink. He continues to frequent the Paris demi-monde and to paint it affectionately, unjudgmentally. He interacts with its denizens the same way — sympathetic, but unengaged. The implication is that the conviction has now been definitively confirmed that he cannot be loved physically, and that he will never again expose himself to the added torment of inspiring disgust by seeking love.

His sense of being repulsive overflows only occasionally to his work or his words. It is mostly his body’s inability to inspire anything other than disgust that prevents him from daring to hope or to respond when another woman, far better born than the prostitute, and deeply responsive to both his art and his character, may or may not have fallen in love with him. She may love him, or she may just identify with him in a deeper way than the prostitute did; but she seeks a sign whether he will ever be able to allow himself to reciprocate or even acknowledge her feelings, and he is unable to allow himself to dare to show her — and perhaps even himself — that he loves her (although he does, having secretly followed her, jealously, exactly as he had the prostitute). So she — not a prostitute, but, like a prostitute, needing a provider — agrees to marry someone she does not love. As with the prostitute, his last-minute impulse to call her back comes too late.

What is most universal about this film is that the sense of stigma that generates such a sense of being incapable of being loved, especially carnally, is not reserved for the physically disfigured. Or perhaps “appearance” is subtler than just bodily form.


Note added Jan 24 2010: Since seeing Offenbach’s Les Contes d’Hoffmann, both Hoffmann and Kleinzach come to mind — but perhaps these were all late 19th-century bohemian/Parisian clichés…

Ethics of Open Access to Biomedical Research: Just a Special Case of the Ethics of Open Access to Research


SUMMARY: The ethical case for Open Access (OA) (free online access) to research findings is especially salient when it is public health that is being compromised by needless access restrictions. But the ethical imperative for OA is far more general: It applies to all scientific and scholarly research findings published in peer-reviewed journals. And peer-to-peer access is far more important than direct public access. Most research is funded to be conducted and published, by researchers, in order to be taken up, used, and built upon in further research and applications, by researchers, for the benefit of the public that funded it, not in order to generate revenue for the peer-reviewed journal publishing industry — nor even because there is a burning public desire to read (much of) it.


(1) All peer-reviewed research articles are written for the purpose of being accessed, used, applied and built upon by all their potential users, everywhere, not in order to generate royalty income for their author (or their publisher). (This is not true of writing in general, e.g., newspaper and magazine articles by journalists, or books. It is only true, without exception, of peer-reviewed research journal articles, and it is true in all disciplines, without exception.)

(2) Research productivity and progress, and hence researchers’ careers, salary, research funding, reputation, and prizes all depend on the usage and application of their research findings (“research impact”). This is enshrined in the academic mandate to “publish or perish,” and in the reward system of academic research.

(3) The reason the academic reward system is set up that way is that that is also how research institutions and research funders benefit from the research output they produce and fund: by maximizing its usage and impact. That is also how the cumulative research cycle itself progresses and grows, along with the benefits it provides for society, the public that funds it: In order to be used, applied, and built upon, research needs to be accessible to all its potential users (and not only to those that can afford access to the journals in which the research happens to be published).

(4) Open Access (OA) — free online access — has been demonstrated to increase research usage and impact by 25%-250% or more. This “OA Advantage” has been found in all fields: natural sciences, biomedical sciences, engineering, social sciences, and humanities.

(5) Hence it is true, without exception, in all fields, that the potential research benefit is there, if only the research is made OA.

(6) OA has only become possible since the onset of the online era.

(7) Research can be made OA in two ways:

— (7a) Research can be made “Gold OA” by publishing it in an OA journal that makes it free online (with some OA journals, but not all, covering their costs by charging the author-institution for publishing it rather than by charging the user-institution for accessing it; many Gold OA journals today still continue to cover their costs via subscriptions to the paper edition).

— (7b) Or research can be made “Green OA” by publishing it in a conventional, non-OA journal, but also self-archiving it in the author’s Institutional Repository, free for all.

(8) Despite its benefits to research, researchers, their institutions, their funders, the R&D industry, and the tax-paying public that funds the research, only about 15% of researchers are spontaneously self-archiving their research today (Green OA). (A somewhat lower percentage is publishing in Gold OA journals, deterred in part by the cost.)

(9) Only Green OA is entirely within the hands of the research community. Researchers’ funders and institutions cannot (hence should not) mandate Gold OA; but they can mandate Green OA, as a natural extension of their “publish or perish” mandate, to maximize research usage and impact in the online era. Institutions and funders are now actually beginning to adopt Green OA mandates, especially in the UK, and also in Europe and Australia; the US is only beginning to propose Green OA mandates.

(10) Some publishers are lobbying against Green OA self-archiving mandates, claiming they will destroy peer review and publishing. All existing evidence is to the contrary. (In the few fields where Green OA already reached 100% some years ago, the journals are still not being canceled.) Moreover, it is quite clear that even if 100% Green OA should ever lead to unsustainable subscription cancellations, journals will simply convert to Gold OA, and institutions will then cover the Gold OA publishing costs of their own outgoing journal articles by redirecting their windfall savings from canceled subscriptions to incoming journal articles. The net cost will also be much lower, as it will only need to cover peer review and its certification by the journal name: the distributed network of OA Institutional Repositories will become the online access-providers and archivers (and the paper edition will be obsolete).

(11) One of the ways the OA movement is countering publishers’ lobbying against Green OA mandates is by forming the “Alliance for Taxpayer Access.” This lobbying group focuses mainly on biomedicine and the potential health benefits of taxpayer access to biomedical research. This is a valid ethical and practical rationale for OA, but it is neither the sole rationale nor the primary one.

(12) The primary, fundamental and universal rationale for OA and OA mandates, in all disciplines, including biomedicine, is researcher-to-researcher access, not public access (nor even educational access). The vast majority of peer-reviewed research in all disciplines is not of direct interest to the lay public (nor even to students, other than graduate students, who are already researchers). And even in biomedical research, what provides the greatest public benefit is the potential research progress (leading to eventual applications that benefit the public) that arises from maximizing researcher-to-researcher access. Direct public access of course comes with the OA territory. But it is not the sole or primary ethical justification for OA, even in biomedical research.

(13) The general ethical rationale and justification for OA is that research is funded, conducted and published in order to be used and applied, not in order to generate revenue for the journal publishing industry. In the paper era, the only way to achieve the former was to allow access to be restricted to those researchers whose institutions could afford to subscribe to the paper edition: that was the only way the true and sizable costs of peer-reviewed research publishing could be covered at the time.

(14) But in the online era this is no longer true. Hence it is time for the institutions and funders who employ the researchers and fund the research to mandate that the resulting journal articles be made (Green) OA, to the benefit of the entire research community, the vast R&D industry, and the tax-paying public. (This may or may not eventually lead to a transition to Gold OA.)

(15) It is unethical for the publishing tail to be allowed to continue to wag the research dog. The dysfunctionality of the status quo is especially apparent when it is public health that is being compromised by needless access restrictions, but the situation is much the same for all scientific and technological research, and for scholarship too, inasmuch as we see and fund scholarly research as a public good, and not a subsidy to the peer-reviewed journal industry.

Stevan Harnad
American Scientist Open Access Forum

On living, reading, writing and acrasia

Proust describes Swann as lazy in his scholarly work, as one who is more interested in life itself than in reading or writing. Swann found an excuse for his laziness in “the idea that ‘Life’ contains situations more interesting and more romantic than all the romances ever written.” Swann had “acquired the habit of finding life interesting–of marveling at the strange discoveries that there were to be made in it.” (SW, Swann In Love)

Proust often reproached himself for not sitting down at his desk and working. He probably used the same excuse for his laziness that Swann used: life is more interesting than books. Nor was Proust the only writer to believe this; “from life,” said Kafka, “one can extract comparatively so many books, but from books so little, so very little, life.” (Conversations With Kafka, by G. Janouch)

For the writer maybe, but for the reader?

And can one have the will (or wherewithal) to write without first having had it to read?

If not, then for a writer, laziness in reading is more unpardonable (and more limiting) than laziness in writing.