Who needs TV anyway?
These seminars from the European Molecular Biology Laboratory, part of its Prestigious Lecture Series, are online and ad-free. If you squint you may recognise the most recent speaker: Sydney Brenner (quoted in my last post; he talks a lot of sense).
Elizabeth Blackburn is in here too, the 2009 Nobel laureate who isolated telomerase, an enzyme with an interesting story (one that’s far from over).
You can change the playback speed too, which is neat if you’re pushed for time. These are almost “old” (from 2011 going back as far as December 2006) but they’ve aged just fine & hey, it makes an alternative to cat videos.
The thing is to have no discipline at all. Biology got its main success by the importation of physicists that came into the field not knowing any biology and I think today that’s very important.
I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant. Because I think ignorance in science is very important.
From “How Academia and Publishing are Destroying Scientific Innovation: A Conversation with Sydney Brenner” by Elizabeth Dzeng
The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.
But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow.
…committees who devise huge schemes in order to try to change things… Nothing happens because the committee is a regression to the mean, and the mean is mediocre.
See also: Hard Cases, excerpt from the transcript of the trial of T. Cobley et al. vs the Editors and Publishers of Nascence, before Lord Justice Abel. (A parody by Brenner of journals such as Nature)
Somebody like Turing, who’s done something that just changes the world (for the better by the way)… that’s what matters. And we don’t currently do a good enough job of pointing out, of recognising or rewarding it.
…Biologic-based computing is a double-edged sword. The speed of course is minuscule, the density is incredible.
We have chips that are synaptic, architected very much like the human synapses in the brain. Not biologic, but computing using biological architecture is a middle ground making tremendous progress. I’m not entirely convinced that we’re going to have wet biology in the near future that we can get to pan out… but we certainly are using elements of biologics.
We are so far more efficient than any machine out there that we’re clearly missing something. It’s not an on/off binary system, but a multi-state system with connectivity on incredible orders of magnitude. The fan-out of the human brain compared to that on a transistor is probably thousands to millions of times greater depending upon where you are in the cortex. There’s obviously an architectural discontinuity and we have obviously not sorted it out to that level.
I don’t agree with his idea that scientists ought to be “rock stars”; pop culture warps science enough already, as Ken Weiss brought up in a recent post which says what a lot of us must be thinking by now.
The lecture, held in Turing’s birthplace of London, was screened live at my university in Manchester, where Turing worked from 1948. There he joined Max Newman, assisted in the development of the Manchester computers, and developed an interest in mathematical biology (morphogenesis: pattern formation in development). His paper predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, which came to wide attention in the 1960s.
⧞ Turing, A. M. (1952) The chemical basis of morphogenesis. Phil. Trans. R. Soc. Lond. B 237(641) pp. 37-72
Back in 2012, Philip Ball wrote a fantastic piece on Turing patterns and the uses they’ve found in recent times, and this week he followed it up with a short article on computational chemicals — which are “enabling purely chemical systems to mimic intelligent behaviours”.
There is, as Ball notes, a pleasing symmetry in Turing’s work being explored as the basis of a kind of chemical computer, given that computation is what earned him most renown.
Turing proposed that the embryo becomes patterned into regions with different anatomical fates by chemical substances called morphogens (literally ‘shape-formers’) which diffuse through cells and tissues. He imagined them as catalysts that react to produce other reagents, some of which will ultimately govern the destiny of cells. Turing was deliberately vague about what the morphogens are ‒ they could be hormones, perhaps, or genes. (It wasn’t yet clear, a year or so before Watson and Crick’s seminal paper on the structure of DNA, what genes were or how they were encoded in the chromosomes). The key point is that the morphogens diffuse and react with one another: his scheme is what is now known as a reaction-diffusion system.
Turing presented a mathematical analysis of how, under certain conditions, the interacting morphogens could give rise to ‘blobs’ of different chemical composition as they drift through a uniform system, even sketching a 2D “dappled pattern” which he calculated his scheme might produce. He did this by “manual computation”, then the only way to crunch numbers.
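To make the reaction-diffusion idea concrete, here’s a minimal numerical sketch in Python. It uses the Gray–Scott model, a modern two-morphogen scheme rather than Turing’s original linearised equations, but the essential ingredients are the same ones Ball describes: two species that react with one another and diffuse at different rates, until the uniform state breaks into blobs. The parameter values are commonly quoted illustrative ones, not anything from the 1952 paper.

```python
import numpy as np

# Gray-Scott reaction-diffusion: morphogen U is fed in and consumed by V
# (U + 2V -> 3V), V decays; both diffuse, with U spreading faster than V.
n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
# Seed a small square of V to perturb the otherwise uniform state.
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # diffusion rates, feed, kill

def laplacian(Z):
    # Five-point stencil with periodic boundaries.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4*Z)

for _ in range(10_000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# V now holds a stationary 'dappled' pattern grown from a single seed;
# plot it with e.g. matplotlib's imshow to see the spots.
```

Turing had to do the equivalent of that loop by hand; here it runs in seconds.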
It later transpired that Turing’s mechanism isn’t necessary for symmetry breaking of a fertilized egg. Instead, the symmetry is disrupted from the outset by maternal proteins diffusing from one side of the embryo. Yet as chemist Patrick De Kepper of the University of Bordeaux points out, the real triumph of Turing’s paper was to show that “no vitalist principle is required for biological development ‒ ordinary physical and chemical laws could do the job.”
“The wave-like spread is comparable with the spread of an infection or of a forest fire”, explains [developmental biologist Hans] Meinhardt. Essential to the pulsed activity of the waves is the fact that once a wavefront has passed through, a region enters a ‘refractory’ period during which it can’t support another wavelike excitation ‒ in the forest-fire analogy, this is the time taken for trees to regrow.
These chemical travelling waves are different from Turing’s stationary patterns, but the general principles of reaction and diffusion are the same. What differs are the relative rates by which the ingredients diffuse. The connections between the two systems first began to emerge in the late 1960s from the work of Russian-born Belgian chemist Ilya Prigogine and his coworkers at the University of Brussels. Reaction-diffusion patterns, which Prigogine referred to as ‘dissipative structures’ because they are sustained by dissipation of energy in a non-equilibrium process, formed a central component of the work on non-equilibrium thermodynamics that earned Prigogine a Nobel Prize in chemistry in 1977. It’s often forgotten that Turing himself recognized that under certain conditions systems of three morphogens could produce travelling chemical waves in his scheme. Meinhardt has shown that an activator-inhibitor scheme with a third morphogen that creates short-ranged but long-lasting inhibition can reproduce the kinds of complex patterns seen on some mollusc shells, which are in effect frozen traces of two-dimensional travelling waves on the rim of the growing shell.
This is illustrated in the image above, taken from Meinhardt’s 2009 tome dedicated to the topic.
Quorum, the Latin genitive of qui — “of whom” — is an archaic political term for the minimum number of members that must be present for proceedings to be valid (often, though not always, a majority). Quorum sensing is a way that bacteria coordinate social behaviour, and its discovery challenged the model of bacteria as asocial, single-celled organisms; Bonnie Bassler has pioneered much of the work (you can watch her talk about it here).
Many Gram-negative bacteria use N-acyl homoserine lactones (AHLs) as signal molecules for quorum sensing, while Gram-positive bacteria use oligopeptides as their ‘autoinducers’.
Being able to gauge how many others are in the population permits bacteria behaviours that are only viable when carried out in synchrony by a large population, such as the production and secretion of pathogenic toxins.
The basic principle of this type of cell-cell signalling is that all bacteria of a given species secrete a small molecule (unique to that species) at a basal level. Each also possesses a cognate receptor (which may itself be a transcription factor) that initiates a signalling cascade resulting in the transcription of specific genes. These genes are linked to behaviours that only succeed once the population is large enough.
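As a toy illustration of that logic (with entirely made-up numbers and no particular species in mind), the switch-like behaviour already falls out of a one-line steady-state balance between secretion and decay:

```python
# A hypothetical sketch of the quorum-sensing principle above: every cell
# secretes autoinducer at a basal rate, the pool decays, and the target
# genes switch on once the pooled concentration crosses the receptor's
# activation threshold. All values are arbitrary illustrative units.
basal_rate = 0.1   # autoinducer secreted per cell per unit time
decay = 0.5        # first-order loss of autoinducer
threshold = 5.0    # receptor activation threshold

def steady_state_autoinducer(n_cells):
    # dA/dt = basal_rate * n_cells - decay * A = 0  =>  A = basal * n / decay
    return basal_rate * n_cells / decay

for n_cells in (1, 10, 100):
    a = steady_state_autoinducer(n_cells)
    state = "ON" if a > threshold else "off"
    print(f"{n_cells:>4} cells -> [autoinducer] = {a:5.1f}, target genes {state}")
```

Only the 100-cell “colony” crosses the threshold, which is the whole point: the signal concentration acts as a proxy for population size.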
Since the discovery of this method of intra-species communication, further research has found that bacteria also have similar mechanisms in place for inter-species communication, and it’s hoped that once these are sufficiently well understood, we might be able to develop alternative antibiotics with very different mechanisms of activity to those in use today.
Given that it’s non-bactericidal, shutting off the quorum signal (“quorum quenching”) would exert a weaker selective pressure, making it less likely that resistant mutants arise to render our drugs useless.
The AHLs used by Gram-negative bacteria interact with LuxR-type receptors to modulate virulence, trigger biofilm formation, and communicate stress and other phenotypes between bacteria in a colony.
A new paper from the University of Wisconsin-Madison describes how an n→π* orbital interaction is the source of a peculiar feature of these molecules, one that could be exploited by quenchers.
The preorganization of two carbonyl groups due to the constraint of an intervening ring can enhance an n→π* interaction. We realized that the γ-lactone of an AHL restricts its ψ dihedral angle (Ni‑Cαi‑C′i‑Ni+1) and that amidic resonance restricts its ω dihedral angle (Cαi‑1‑C′i‑1‑Ni‑Cαi), leaving only a single unconstrained bond between the two carbonyl groups. In this sense, an AHL is analogous to a proline residue, which has a restricted ϕ dihedral angle (C′i‑1‑Ni‑Cαi‑C′i) and has a strong tendency to form an Oi‑1···C′i=Oi n→π* interaction. Thus, we suspected that AHLs, like proline residues, could be predisposed to form an n→π* interaction.
In an n→π* interaction, the filled lone pair (n) of one C=O group interpenetrates the empty π* orbital of another, releasing energy and thus causing attraction between the groups.
The overlap is most effective when the oxygen of the lone-pair donor makes a sub-van der Waals contact (interatomic d < 3.22 Å) with the carbon of the acceptor carbonyl, along the Bürgi‑Dunitz trajectory for nucleophilic addition (95° < θ < 125°).
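Those two criteria amount to a simple geometric test. The sketch below checks a donor-oxygen/acceptor-carbonyl pair against the distance and Bürgi‑Dunitz windows just quoted; the coordinates are invented for illustration, not taken from the paper’s structures.

```python
import numpy as np

def is_n_to_pi_star(o_donor, c_acceptor, o_acceptor):
    """Test the d < 3.22 A and 95-125 degree criteria for an n->pi* contact."""
    o_donor, c_acceptor, o_acceptor = map(np.asarray, (o_donor, c_acceptor, o_acceptor))
    v1 = o_donor - c_acceptor     # approach vector: donor O seen from acceptor C
    v2 = o_acceptor - c_acceptor  # the acceptor C=O bond vector
    d = np.linalg.norm(v1)        # O...C distance in angstroms
    cos_theta = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.degrees(np.arccos(cos_theta))  # Burgi-Dunitz angle
    return (d < 3.22) and (95 < theta < 125), d, theta

# Invented coordinates (angstroms), roughly mimicking a close carbonyl pair:
ok, d, theta = is_n_to_pi_star([0.0, 0.0, 2.9], [0.0, 0.0, 0.0], [1.1, 0.5, -0.4])
print(f"d = {d:.2f} A, theta = {theta:.0f} deg, criteria met: {ok}")
```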
As these are relatively weak interactions, their influence is often realized only in systems in which carbonyl groups are in close proximity, as they are in proteins, peptides, peptoids [poly-N-substituted glycines], polyesters, and some small molecules.
The n→π* interaction “engenders pyramidalization of the acceptor carbonyl carbon toward the donor oxygen” (measured as a distortion parameter Θ).
In N-trimethylacetyl homoserine lactone, we observed substantial pyramidalization (Θ = 2.7°) of the acceptor carbonyl toward the donor, in accord with that observed for other molecules with confirmed n→π* interactions. Distortion of the carbonyl carbon toward the n→π* donor is strong evidence of an attractive interaction; otherwise, distortion would likely occur away from the short contact so as to reduce unfavorable Pauli repulsion.
Using natural bond orbital (NBO) analysis with second-order perturbation theory*, they evaluated the interaction energy to be 0.64 kcal mol⁻¹, larger than that for a proline residue, consistent with the carbonyl group of an ester being a better acceptor than that of an amide.
*Perturbation theory is an approach to approximating solutions of the Schrödinger equation: you find a simpler but related problem that can be solved exactly, then treat the difference between the two Hamiltonians as a small correction and ask how it shifts the solutions. It’s only really appropriate when the differences are small, naturally.
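For the curious, the textbook Rayleigh–Schrödinger form of that second-order energy correction (a standard result, not something quoted from the paper) is:

```latex
% The perturbation \hat{V} mixes the unperturbed state |n^{(0)}\rangle with
% every other state |m^{(0)}\rangle, weighted by how close their unperturbed
% energies lie.
E_n^{(2)} = \sum_{m \neq n}
    \frac{\left|\langle m^{(0)}|\hat{V}|n^{(0)}\rangle\right|^2}
         {E_n^{(0)} - E_m^{(0)}}
```

In the NBO setting, the donor lone pair n and the acceptor π* take the roles of the two states, so the stabilisation grows with the square of their coupling and shrinks as the orbital energy gap widens.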
Steric effects were noted as important factors. The key finding is that this particular interaction contributes to the conformation of an important signalling molecule, and that it can be modulated by the choice of N-acyl substituent.
Moreover, γ-lactones undergo hydrolysis, which abolishes an AHL’s signalling.
An n→π* interaction increases the energy of the acceptor π* orbital and thereby reduces the electrophilicity of the carbonyl group of the γ-lactone. Thus, the n→π* interaction of AHLs could protect their γ-lactones against hydrolysis.
Since any weakening of this interaction should lead to a greater affinity for the receptor, we now have a much better idea of how synthetic AHLs should be designed, including the perhaps counterintuitive suggestion to “strengthen this interaction, decrease the rate of hydrolysis and endow AHLs with a longer biological half-life”.
✾ Newberry, R. W. & Raines, R. T. (2014) A key n→π* interaction in N-acyl homoserine lactones. ACS Chemical Biology, in press. doi:10.1021/cb500022u
Instead of applying observation to the things we wished to know, we have chosen rather to imagine them. Advancing from one ill founded supposition to another, we have at last bewildered ourselves amidst a multitude of errors. These errors becoming prejudices, are, of course, adopted as principles, and we thus bewilder ourselves more and more. The method, too, by which we conduct our reasonings is as absurd; we abuse words which we do not understand, and call this the art of reasoning. When matters have been brought this length, when errors have been thus accumulated, there is but one remedy by which order can be restored to the faculty of thinking; this is, to forget all that we have learned, to trace back our ideas to their source, to follow the train in which they rise, and, as my Lord Bacon says, to frame the human understanding anew.
This remedy becomes the more difficult in proportion as we think ourselves more learned. Might it not be thought that works which treated of the sciences with the utmost perspicuity, with great precision and order, must be understood by every body? The fact is, those who have never studied any thing will understand them better than those who have studied a great deal, and especially than those who have written a great deal.
…But, after all, the sciences have made progress, because philosophers have applied themselves with more attention to observe, and have communicated to their language that precision and accuracy which they have employed in their observations: In correcting their language they reason better.
Étienne Bonnot de Condillac
Transcribed in the Preface to Lavoisier’s Elements (1799)
The scale-free research citation network
In network science, a scale-free network is one whose degree distribution follows a power law: a heavy tail that falls off polynomially rather than exponentially. This is best understood with some real-world examples, such as the graph above, taken from a joint report on International comparative performance of the UK research base from Elsevier and the UK government’s Department for Business, Innovation & Skills.
As Philip Ball wrote in Nature log(100000) years ago in his piece “Why we should love logarithms”,
Power laws have been discovered not only for landslides and solar flares but for many aspects of human culture: word-use frequency, say, or size-frequency relationships of wars, towns and website connections.
Scale-free networks have been proposed to come about from so-called ‘rich get richer’ processes (Barabási and Albert, 1999). The pair noted in Science that real-world degree distributions were not like the Poisson distribution reported by earlier researchers (below), which has a sigmoidal cumulative distribution (in comparison to the distribution exemplified above).
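Their growth rule is simple enough to sketch in a few lines of Python (a toy version, with illustrative parameters): each incoming node attaches to m existing nodes with probability proportional to their current degree, and that bias alone generates the hubs.

```python
import random
from collections import Counter

def barabasi_albert(n_nodes, m=2):
    # 'targets' lists each node once per link it holds, so a uniform draw
    # from it is automatically degree-biased ('rich get richer').
    targets = list(range(m + 1)) * m          # seed: a small (m+1)-clique
    degrees = Counter(targets)
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:                # m distinct, degree-biased picks
            chosen.add(random.choice(targets))
        for t in chosen:
            targets += [new, t]
            degrees[new] += 1
            degrees[t] += 1
    return degrees

degrees = barabasi_albert(10_000)
ranked = sorted(degrees.values())
# Most nodes stay near the minimum degree; a few hubs grow enormous.
print("median degree:", ranked[len(ranked) // 2], "| max degree:", ranked[-1])
```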
It’s important to note that network models aren’t themselves empirical, and one of the criticisms of scale-free models is that power-law distributions show up almost whenever one looks at a complex system. Does that mean they’re fundamental?
Instead, it may be that complex systems are simply composed of heterogeneous elements with different properties. Critics of the model argue that the power law is not necessarily a structural feature of the system by design; it may appear only because we are observing a very diverse system.
Vilfredo Pareto, an aristocrat and Italian economist, left a two-decade career in railway engineering for the “mathematical beauty of Newtonian physics” and devoted himself to rendering economics an exact science, describable by laws to match those of Newton’s Principia.
His three-volume Trattato di Sociologia Generale continues to be a source of inspiration for theorists in socioeconomics today. His relevance here comes from an observation made while gardening: he saw that 80% of his peas were produced by just 20% of the peapods. Likewise, 80% of Italy’s land was owned by 20% of the populace. As Barabási puts it in his 2002 text Linked, “in most cases, four-fifths of our efforts are largely irrelevant”. Barabási gives an 80:38 figure for scientific citations, but the Elsevier-BIS report finds almost exactly the Pareto principle’s 80:20 split.
Though it might be tempting to infer the 80/20 rule applies to just about anything, that would be a gross overstatement. In reality all systems following Pareto’s Law are a bit special. What sets them apart is a property that plays a key role in understanding complex networks as well.
— Barabási, A.-L. (2002) Linked: The New Science of Networks, p. 66
Random networks follow a bell curve in their degree distribution, while a histogram that follows a power law is a continually decreasing curve, implying that many small events coexist with a few large events. Those large events would be forbidden on a bell curve, that is, on an uncollaborative, random network.
Viewing publications as the nodes of a network and citations as the links between them, studies have found consistent power-law distributions for the number of nodes with exactly k links, with an exponent between 2 and 3.
Power laws mathematically formulate the fact that in most real networks, the majority of nodes have only a few links and that these numerous tiny nodes co-exist with a few big hubs, nodes with an anomalously high number of links. The few links connecting the smaller nodes to each other are not sufficient to ensure that the network is fully connected. This function is secured by the relatively rare hubs that keep real networks from falling apart.
— Ibid, p.70
Are the networks of researchers we observe in citations the product of social networks, journal reach etc., or of the interconnectedness of the knowledge the publications describe?
There is no characteristic number of citations for a publication: the network is scale-free.
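A quick numerical illustration of what “no characteristic number” means in practice (the exponent and sample size here are illustrative, not fitted to the Elsevier-BIS data): draw citation counts from a power law with an exponent in the reported 2–3 range and look at how concentrated the total becomes.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n_papers = 2.5, 100_000

# Discrete power law P(k) proportional to k^(-gamma), truncated for sampling.
k = np.arange(1, 100_001)
p = k ** -gamma
p /= p.sum()
citations = np.sort(rng.choice(k, size=n_papers, p=p))

share = citations[int(0.8 * n_papers):].sum() / citations.sum()
print(f"top 20% of papers hold {share:.0%} of all citations")
# Heavier tails (gamma closer to 2) push this share toward, and past,
# Pareto's 80:20 split.
```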
Recently, the NIH (the American funding agency for research in the life sciences) announced it would experiment with a programme to fund “high-risk, high-reward” grants, under the categories of Pioneer, New Innovator, Transformative Research and Early Independence.
The organisation calls this “balancing our portfolio”, and it reflects the tactics used by the UK’s Wellcome Trust and the HHMI. The idea is that funding ‘people not projects’ increases research output, although the effect this has on health outcomes is debated.
There is much debate about the article-level, journal-level and researcher-level metrics used to measure performance, and about the I-word (an impact factor could be obtained, roughly, by reading off the citations against articles at the 50% mark on a graph such as the main image above).
The suggestion that these scale-free networks may look different from this unhelpfully blanket distribution is beginning to be addressed. The UK’s MRC has just announced new funding to assess the economic impact of research, aiming to find ways of giving the citation measure more value in real terms:
Understanding the relative valuations of research impact: Applying best-worst scaling experiments to survey the public and biomedical/health researchers (Dr Jonathan Grant – RAND Europe)
This project will examine how those who benefit from medical research - the general population - and researchers assess the value of research impact on the economy and society. Estimating the relative value of research impacts to different stakeholders will lead to a better understanding of impact and ultimately could allow peer reviewers to consider public preferences for different types of impact in their decision making when assessing research applications for funding.
The sources for the citation graph are:
✦ Bornmann, L., et al. (2011) Mapping excellence in the geography of science: An approach based on Scopus data. Journal of Informetrics 5(4) pp. 537-546
✦ Bornmann, L. & Marx, W. (2013) How good is research really? Measuring the citation impact of publications with percentiles increases correct assessments and fair comparisons. EMBO Reports 14(3) pp. 226-230.
The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined… this combinatory play seems to be the essential feature in productive thought before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.
Albert Einstein in a letter to Jacques Hadamard
Hadamard was a French mathematician who made major contributions in number theory, complex function theory, differential geometry and partial differential equations.
A select few concepts in life (its mathematical underpinnings included) are best understood graphically, rather than analytically.
From the website of Ronald D. Kriz (Envisioning Scientific Information: Three Visual Methods: Envisioning Gradients, Function Extraction, and Tensor Glyphs)
Scientists (e.g. Gibbs, Maxwell, Einstein, Feynman) often reported that visual thinking occurred before they formalised their ideas into words or symbolic script (equations). Their “productive thought” was first to imagine functional relationships between physical properties as a “combinatory play” of images.