Membrane enzymes ‘stop and frisk’ proteins indiscriminately
In a fascinating new study, researchers report on rhomboid proteases, a class of enzymes found in almost all species whose active sites are buried in the lipid bilayer, allowing them to cleave other membrane proteins within their transmembrane domains.
How catalysis works within the viscous, water-excluding, two-dimensional membrane is unknown.
Ultimately, a single proteolytic event within the membrane normally takes minutes. Rhomboid intramembrane proteolysis is thus a slow, kinetically controlled reaction not driven by transmembrane protein-protein affinity. These properties are unlike those of other studied proteases or membrane proteins but are strikingly reminiscent of one subset of DNA-repair enzymes.
This new finding that an enzyme shows no affinity (or “little, if any, meaningful affinity”) for its substrate is remarkable: substrate selection by enzymes is usually governed by binding affinity, i.e. thermodynamically rather than kinetically controlled.
Naturally, the authors went out of their way to ensure this matched in vivo conditions. Having done so with a host of different biochemical techniques, including FRET (schematic shown below), they examined proteolysis in detergent micelles (artificial membranes) with another neat technique called equilibrium gel filtration, more common in pharmacological studies, with which Kd can be measured accurately.
The “subset of DNA repair enzymes” alluded to in the excerpt above are DNA glycosylases, which catalyse the first step in base excision repair (generating an AP [apurinic/apyrimidinic] site, where a base is missing from the DNA).
DNA glycosylases remove damaged bases from DNA using an intriguing mechanism that involves two different enzyme sites. Nucleotides flipped out of the DNA double helix first interact with an “interrogation site” on the DNA glycosylase. Importantly, a damaged base is not bound with high affinity per se; instead, it spends more time in the dynamic, extrahelical state and thus stays longer in the interrogation complex. This longer residence allows the base to translocate to a second, deeper site (the excision site), where the glycosidic linkage is clipped to excise the base from DNA. The discriminatory mechanism is therefore rate-governed, with only a minor contribution from binding affinity to the damaged base itself. The second key property of these DNA glycosylases is a slow kcat: it ensures that catalysis takes longer than the brief extrahelical excursions of normal bases, kinetically protecting them from hydrolysis.
These striking parallels suggest that low substrate affinity and slow rate of rhomboid proteolysis are not defects but, rather, features of this enzyme system.
The group hypothesise that rhomboid proteases also use an analogous “interrogation” site to kinetically discriminate substrates from non-substrates. The suggestion they outline below is portrayed in the diagram above for a helical domain (red).
Although the gate has been viewed simply as a point of substrate entry, the crevice created by gate opening, which is stable in the membrane, may actually be an “interrogation” site. Like with DNA glycosylases, this site is physically separated from the deeper active site, which would force transmembrane helices to reside in the unwound state to reach the catalytic residues for proteolysis to ensue, instead of returning laterally to the membrane. Our recent spectroscopy analysis of substrates revealed that they form inherently less-stable helices than nonsubstrates, which suggests that they would spend more time in the unfolded state and thus reach the inner “scission site” from the “interrogation site.”
It is thus tempting to speculate that the primordial function of rhomboid proteases was to patrol the membrane looking for unfolded membrane proteins to cleave as a repair mechanism analogous to how DNA glycosylases patrol the genome for damaged bases.
Rhomboid proteases resemble water-filled barrels with side gates for protein entry, as shown in the main image, above. This work suggests that while both stable and unstable proteins enter the side gate, stable proteins likely remain intact and drift back out into the membrane in the 2 and a half minutes it takes the protease to strike.
Unstable proteins however will start to wobble in the watery interior (giving away their instability, previously masked by the stabilising, viscous membrane) and struggle to exit the enzyme’s “barrel” — hence they will get snipped and recycled.
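The logic of this rate-governed discrimination can be sketched with a toy kinetic model (all rate constants here are invented for illustration; only the ~2.5-minute cleavage time comes from the study): a protein inside the protease faces a race between escaping back into the membrane at rate k_exit and being cleaved at the slow, fixed rate k_cat, so the chance of being cut is simply k_cat / (k_cat + k_exit).

```python
# Toy sketch of rate-governed substrate discrimination.  All rate constants
# are invented for illustration; only the ~2.5-minute cleavage time comes
# from the study.  With two competing first-order steps, lateral exit
# (k_exit) vs cleavage (k_cat), P(cleaved) = k_cat / (k_cat + k_exit).

def cleavage_probability(k_cat, k_exit):
    """Probability that cleavage wins the race against lateral exit."""
    return k_cat / (k_cat + k_exit)

K_CAT = 1 / 150.0  # s^-1: roughly one cut per 2.5 minutes

# Suppose a stable helix drifts back out in ~1 s, an unstable one in ~100 s.
stable   = cleavage_probability(K_CAT, k_exit=1.0)
unstable = cleavage_probability(K_CAT, k_exit=0.01)

print(f"stable helix cleaved:   {stable:.4f}")   # ~0.0066, almost always escapes
print(f"unstable helix cleaved: {unstable:.2f}") # 0.40, often cleaved
```

The same slow kcat thus protects well-folded proteins while still catching the slow-exiting, unstable ones: discrimination without any difference in binding affinity.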
While the authors remain cautious, the existence of iRhoms (not mentioned in the piece), pseudoproteases that, instead of cleaving proteins, divert them into the ER-associated degradation (ERAD) pathway for proteasomal destruction, suggests another level of control, if this is indeed (or evolutionarily was) a repair mechanism.
Whether weak substrate binding is a general requirement for catalysis inside the membrane, or a deliberate specialisation of this class of enzymes, remains to be determined.
Dickey et al. (2013) Proteolysis inside the Membrane Is a Rate-Governed Reaction Not Driven by Substrate Affinity. Cell, 155(6) 1270-81
Comparison of the arrangement of CA (capsid) domains in mature and immature retrovirus lattices
As is best seen through representations of symmetry, the protein conformations in HIV-1's capsid undergo significant rotational and translational changes during maturation.
CA is one of three conserved folded domains in the HIV-1 Gag protein: MA (matrix), CA (capsid) and NC (nucleocapsid).
Assembly of an infectious virion proceeds in two stages. In the first stage, Gag oligomerization into a hexameric protein lattice leads to the formation of an incomplete, roughly spherical protein shell that buds through the plasma membrane of the infected cell to release an enveloped immature virus particle.
In the second stage, cleavage of Gag by the viral protease leads to rearrangement of the particle interior, converting the non-infectious immature virus particle into a mature infectious virion.
The immature Gag shell acts as the pivotal intermediate in assembly and is a potential target for anti-retroviral drugs both in inhibiting virus assembly and in disrupting virus maturation. However, detailed structural information on the immature Gag shell has not previously been available. For this reason it is unclear what protein conformations and interfaces mediate the interactions between domains and therefore the assembly of retrovirus particles, and what structural transitions are associated with retrovirus maturation.
The 8-Å resolution structure permits the derivation of a pseudo-atomic model of CA in the immature retrovirus, which defines the protein interfaces mediating retrovirus assembly.
In the accompanying paper, published the previous year, the group showed that the interactions stabilising the immature and mature viruses are almost completely distinct.
See also: Briggs and Krausslich (2011) The Molecular Architecture of HIV. Journal of Molecular Biology, 410(4) 491-500
▲ Bharat et al. (2012) Structure of the immature retroviral capsid at 8 Å resolution by cryo-electron microscopy. Nature, 487(7407) 385-9
The Ries group at EMBL Heidelberg, Germany are planning “to image the whole proteome of budding yeast with a resolution of ~20 nm in dual-color and 3D.” (!)
The combination of optical super-resolution microscopy with dynamic microscopy techniques such as fluorescence correlation spectroscopy (FCS) or single-particle tracking (SPT) bears great potential for relating structure, localisation and function, as does combination with electron microscopy to add molecular specificity to the ultrastructure.
A friend got me into super-resolution microscopy recently, some very exciting work around.
⇢ Take a look at the lab’s website here
Breaking Barriers and Building Bridges in Cancer Research
Last night saw another Oxbridge Biotech Roundtable chapter meeting in Manchester, where a panel from academia, industry and the charities that go between discussed the perks and pitfalls of collaborations in clinical oncology.
OBR is a student-led, transatlantic body which holds debates such as these to foster discussion and networks in biotech. While cancer biology isn’t something I’m looking to specialise in, as a topic it inevitably pervades most aspects of study. I did get the feeling that some of these points wouldn’t make it into the pages of scholarly lit., so it gave a valuable new perspective for a life sciences undergraduate.
Firstly, the OBR’s own blog (/Review) and upcoming “bio”-entrepreneurship competition (/Onestart) were mentioned, the latter offering a £100,000 prize and open to anyone interested in consultancy (whether already in a team or looking to join one locally).
Dr Catharine West, Professor of Radiation Biology at the University of Manchester, was the sole advocate from academia, though contrary to stereotypes there seemed to be very little friction between these two supposedly very separate groups. Dr West did feel the need to point out several times that the motivation of the drug companies was “not to find a cure for cancer, but to make money”.
Similarly, Dr. Alastair Greystoke, a clinical lecturer in oncology at the University and a local hospital, provided many details of the day-to-day conflicts and solutions in academic-industrial partnerships.
A new report this week into the drug market was mentioned, estimating the development cycle of a single cancer drug at 14 years and (depending on its progression to the clinic) anywhere from £300m to £3bn, averaging a sizeable £1.3bn. An increasing number of new entities are being approved by the FDA in the U.S., although still only a fraction of the activity of the ’90s.
Issues at present apparently lie in identifying new targets and in the “high attrition rate”: 60% of candidate drugs fail in phase II trials and ⅓ in phase III. It was alleged that this may lead drug companies to become risk-averse and institutionalise a “fail-fast” mentality.
The main points discussed were “multi-modality” therapies (chemo., radiation etc. even at preclinical stages) and publication bias.
Drug–radiation interactions were agreed to be important in understanding the real mechanisms at play in these multi-modality treatment schemes: all too often the hypothesis supposedly under investigation was not actually tested, and the results were simply ignored, which Dr Donald Ogilvie, Head of Drug Development at the Manchester Cancer Research Centre, felt was a wasted opportunity to “dig deeper into the data”.
On publication bias, Dr Minesh Jobanputra, Global Medical Affairs Physician at GlaxoSmithKline, countered the suggestion that “Big Pharma” was inherently untrustworthy and hiding negative results with the point that there are huge flaws in the reproducibility of academic research (the Amgen researcher who published in 2012 that only ~10% of preclinical trial results were reproducible was mentioned).
» The author later published his “six red flags for suspect work” in the same journal
Likewise, the irreproducibility of siRNA studies was highlighted as a barrier to drug development (on this particular issue, reproducibility is hindered when it’s difficult to say precisely what effect the siRNA has).
Alastair Greystoke pointed out that phase I trials are nearly always published, but condensed down to around three pages, which effectively constitutes a subtle barrier to transparency, albeit one implemented by journals. As a result, only those with access to the Investigator’s Brochure (or even aware such a document exists) would be able to look any further than these three pages.
Responses to perceived bias centred around improving publication of clinical trial data: as is being done voluntarily by GSK (up to Phase IV, including patient-level data), and is increasingly required by regulatory bodies.
Dr Ogilvie was keen to stress that cancer as a disease is being “hyperfractionated” into many subtypes thanks to sequencing data, which shows tumours are heterogeneous within homogeneous histological samples, and even within a patient. On a similar note, tumour evolution inferred by single-cell sequencing was seen as the next step in developing cancer treatments to be most effective.
Again challenging the stereotype of conflicting academic vs. industrial motivations, such partnerships are determined in advance with contracts specifying “research tools” (known as material transfer agreements); see for example Oxford University’s guidelines.
Lastly, an interesting audience question brought up the Structural Genomics Consortium, which has pushed forward research on medically relevant molecules with protein structures identifying well-annotated epigenetic targets. One such identified molecule, JQ1 (a bromodomain inhibitor) makes for some interesting further reading, with diverse proposed applications from a male contraceptive to cancer treatment.
With a mix of industry buzzwords and clinical terminology I wasn’t always able to follow the conversation, but some final topics being touted as the way forward included:
♣ Removing boundaries: Enabling the movement of researchers across disciplines and the industry-academia interface. RSC News, Jan 2013
Why do some genes seem to respond in a ‘digital’, on/off manner to a graded signal, while others produce an ‘analog’, graded response?
A new study in Current Biology suggests that the DNA-binding properties of transcription factors (TFs) can exert a strong influence on the response patterns of gene networks.
Many cellular processes operate in an “analog” regime in which the magnitude of the response is precisely tailored to the intensity of the stimulus. In order to maintain the coherence of such responses, the cell must provide for proportional expression of multiple target genes across a wide dynamic range of induction states.
Cell–cell signalling pathways, such as the pheromone response system used for mating in yeast, as well as developmental patterning pathways in animals and plants, often incorporate negative feedback loops to produce measured transcriptional responses in proportion to the stimulus.
In this study, genetically engineered yeast expressing a constitutively active form of Msn2 under the control of a hormone-inducible promoter were used to examine transcriptional output from its target genes.
By measuring the activity of Msn2-regulated promoters (driving Yellow Fluorescent Protein reporters), they obtained an idea of the relationship between TF activity and gene expression in vivo.
In many contexts, cells respond to stimuli with decisive commitment to a phenotypic state. It is usually assumed that genes that drive this transition exist in just two alternative functional states, active and inactive, and that the switch between these two states occurs decisively in a narrow regime of transcription factor concentration. In addition to making decisive choices, cells and organisms also need to continuously adjust to the demands of their environment. Systems that are responsible for homeostasis or graded developmental processes may need to operate in an “analog” regime where a response is tailored to the exact intensity of the stimulus in order to prevent deleterious over- or under-reactions.
Having observed little cooperativity, the authors suggest that the low affinity of Msn2 for its DNA binding motifs in vitro, together with the abundance of Msn2 motifs in the yeast genome, reduces the occupancy of Msn2 at its target promoters, via weak TF–DNA interactions and the protein’s sequestration by surplus genomic binding sites.
They tested this hypothesis by making targeted substitutions in the Msn2 DNA-binding domain, altering its DNA-binding specificity. The mutant Msn2 shows higher in vitro binding affinity for its new preferred sequence motif than the wild-type protein has for its own binding site. This newly preferred binding site is rare in the yeast genome, and hence results in non-linear, saturating (but non-sigmoidal) transcriptional output [yellow fluorescence].
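A minimal equilibrium-occupancy sketch (with made-up numbers, not values from the paper) shows why weak binding gives a near-linear response while tight binding saturates: if promoter output tracks fractional occupancy [TF]/([TF] + Kd), then for Kd far above the physiological TF range the output rises almost proportionally with TF activity, whereas for Kd below that range it is already saturated.

```python
# Minimal occupancy model, with illustrative (made-up) numbers: promoter
# activity is taken to track fractional occupancy of a binding site,
# occ = [TF] / ([TF] + Kd).

def occupancy(tf, kd):
    """Fractional occupancy of a site with dissociation constant kd."""
    return tf / (tf + kd)

tf_levels = [0.1, 0.5, 1.0, 2.0, 5.0]  # arbitrary concentration units

weak  = [occupancy(tf, kd=100.0) for tf in tf_levels]  # wild-type-like: Kd >> [TF]
tight = [occupancy(tf, kd=0.1) for tf in tf_levels]    # mutant-like: Kd << [TF]

# Weak binding: output rises almost linearly with TF level...
print([round(o, 4) for o in weak])   # [0.001, 0.005, 0.0099, 0.0196, 0.0476]
# ...tight binding: near-saturated even at the lowest TF level.
print([round(o, 2) for o in tight])  # [0.5, 0.83, 0.91, 0.95, 0.98]
```

In the weak regime, doubling TF activity roughly doubles output at every promoter simultaneously, which is the colinear, stoichiometric behaviour the authors describe.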
…While our studies were focused on the steady-state input/output relationship between Msn2 and its target genes, the linearity we uncovered has important implications for the dynamic operation of the system. This aspect needs to be considered in the context of the PKA regulatory system, which produces dynamic changes in Msn2 activity in response to stress. Low-affinity interactions of Msn2 with DNA induce rapid (subsecond) binding and unbinding of Msn2 to its response elements, allowing for rapid dynamic control of gene expression if the rate-limiting step for promoter activation is transcription factor binding.
In addition to enabling precise tuning of gene expression to the state of the environment, this strategy ensures colinear activation of target genes, allowing for stoichiometric expression of large groups of genes without extensive promoter tuning. Furthermore, such a strategy enables precise modulation of the activity of any given promoter by addition of binding sites without altering the qualitative relationship between different genes in a regulon. This feature renders a given regulon highly “evolvable.”
…As a result, we anticipate this strategy to be a recurring feature of many systems where homeostatic regulation is important.
Stewart-Ornstein et al. (2013) Msn2 Coordinates a Stoichiometric Gene Expression Program. Current Biology, 23(23) 2336–2345
Kaltashov and Eyles, Mass Spectrometry in Structural Biology and Biophysics: Architecture, Dynamics and Interaction of Biomolecules (2012)
2nd ed., pp. 31–32
The C complex spliceosome
The spliceosome is a multimegadalton RNA-protein complex that removes noncoding “introns” from nascent pre-mRNAs (either after transcription or cotranscriptionally). This molecular machine is recruited to splice sites and carries out splicing using a collection of small nuclear ribonucleoproteins, snRNPs (pronounced “snurps”) and various additional protein factors.
The spliceosome’s dynamic nature makes it extremely hard to decipher exactly what takes place; in particular, it has proved very difficult to crystallise the assembly for X-ray analysis. The 2004 paper that gave the best image yet of the situation (results shown above) summed up these difficulties as “purification of stable complexes to compositional homogeneity and assessment of conformational heterogeneity”.
In size and complexity, spliceosomes are comparable to ribosomes. A major difference between them, however, is in the number of stable subcomplexes observable for each. Whereas ribosomes comprise a small and a large subunit that remain relatively unchanged when they join to form an 80S monosome, the spliceosome consists of five major subunits (the spliceosomal snRNPs U1, U2, U4, U5 and U6) that assemble in vitro via a complex series of interactions into at least four stable intermediates (E/CC, A, B and C). Separable by native gel electrophoresis or density gradient sedimentation, these subcomplexes probably represent key structural transitions that become stabilized as the splicing machinery proceeds through splice site identification and then catalysis in vivo. Whereas the ribosome has been subject to intense structural analysis via electron microscopy and X-ray crystallography, the three-dimensional structure of the spliceosome remains largely undetermined.
As also highlighted by Alberts et al. in Molecular Biology of the Cell (5e, p.350), Jurica et al. write that
The intermediate states identified as distinct complexes in vitro may not occur in isolation in vivo. Rather, they may manifest themselves as a mere tightening and loosening of interactions, as well as structural rearrangements, within a larger complex that never truly resolves into smaller subcomplexes. Thus, in contrast to the ribosome, the entire spliceosome may not encompass a single rigid structure, but instead may contain several loosely associated protein factors that gather around a more rigid core complex with a defined composition and structure.
The group used cryogenic conditions for their electron microscopy studies, freezing particles sandwiched between two layers of thin carbon.
The C complex spliceosome analyzed here consists of >100 different protein factors and 4 RNA molecules, and still represents only a transient intermediate in the working cycle of an unusually dynamic molecular machine.
As a result of this difficulty in imaging the large, dynamic spliceosome, the precise timeline of splicing events is not entirely clear (for instance, the U4 snRNP must be released for the catalytic steps that join the two exons to proceed, but exactly when it is lost is undetermined).
One way to look at complex processes like these is using the Gene Ontology (one of many biological ontologies in the Open Biomedical Ontologies project). These can be viewed at ols.wordvis.com, here for example with the query GO:0000350.
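As a concrete illustration of how such an ontology is organised, here is a tiny, hand-built fragment of the GO “is_a” hierarchy around that query, walked from the specific term up to a more general parent; the IDs and names below are written from memory, not fetched from the live ontology, and should be checked against GO itself.

```python
# A tiny, hand-built fragment of the Gene Ontology "is_a" hierarchy
# (IDs and names written from memory, not fetched from the live ontology;
# verify against GO itself before relying on them).

GO_FRAGMENT = {
    "GO:0000350": ("generation of catalytic spliceosome for second "
                   "transesterification step", "GO:0000375"),
    "GO:0000375": ("RNA splicing, via transesterification reactions",
                   "GO:0008380"),
    "GO:0008380": ("RNA splicing", None),  # root of this fragment
}

def lineage(term_id):
    """Follow is_a links from a term up to the root of the fragment."""
    path = []
    while term_id is not None:
        name, parent = GO_FRAGMENT[term_id]
        path.append(f"{term_id}: {name}")
        term_id = parent
    return path

for step in lineage("GO:0000350"):
    print(step)
```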
The step shown here is the formation of the C2 complex spliceosome, that is the Catalytic complex in the 2nd transesterification reaction, in which the 3′-OH of exon 1 nucleophilically attacks the phosphodiester bond at the splice acceptor, making a new bond before the ligated exons are released as a final product of the splicing pathway.
All this complexity may seem ugly, but the constant RNA–RNA rearrangements (in which the formation of one RNA–RNA interaction requires another to be disrupted) allow repeated verification of RNA sequences before the chemical reaction proceeds, making the process more accurate.
⅓ of all hereditary diseases are currently thought to involve errors in splicing.
Although safeguarded by nonsense-mediated decay (the cell’s quality control mechanism), errors still arise through
- mutations in a splice site causing loss of function ⇢ a premature stop codon and subsequent loss of an exon / inclusion of an intron
- mutation of a splice site reducing specificity ⇢ variation in splice location, and insertion/deletion of amino acids or, as is most common, disruption of the reading frame
- displacement of a splice site ⇢ inclusion or exclusion of more RNA than expected gives longer or shorter exons
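To make the reading-frame point concrete, here is a toy example (the sequences are invented, chosen only so the arithmetic is easy to follow): retaining an intron whose length is not a multiple of 3 shifts the downstream frame and, in this case, introduces a premature stop codon.

```python
# Toy illustration of splice-site failure disrupting the reading frame.
# All sequences are invented for this example.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def read_to_stop(mrna):
    """Read codons in frame from position 0, stopping at the first stop codon."""
    codons = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        codons.append(codon)
        if codon in STOP_CODONS:
            break
    return codons

exon1  = "ATGAAA"     # ATG AAA: start codon, then Lys
intron = "GTATAAAG"   # 8 nt (not a multiple of 3), canonical GT...AG ends
exon2  = "TTTGGCTGA"  # TTT GGC TGA: Phe, Gly, stop in the correct frame

spliced  = read_to_stop(exon1 + exon2)           # intron removed correctly
retained = read_to_stop(exon1 + intron + exon2)  # intron retained

print(spliced)   # ['ATG', 'AAA', 'TTT', 'GGC', 'TGA'] (stop at the true end)
print(retained)  # ['ATG', 'AAA', 'GTA', 'TAA'] (premature stop after the shift)
```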
Jurica et al. (2004) Three-dimensional structure of C complex spliceosomes by electron microscopy. Nature Structural & Molecular Biology, 11(3) 265–269
Since plenty of the followers of this blog seem to be other students, and with a new exam season approaching, I thought I’d post the findings from a 50+ page review on the study techniques that do & don’t work, displayed through some neat bar graphs… Click ‘em to see full size.
The least effective techniques:
- Highlighting and underlining textbooks and other materials
- Keywords & mnemonics
- Imagery use for text learning
… the moderately effective:
- Elaborative interrogation — “why” questions to make connections between old & new material
- Self-explanation — pretty self-explanatory…
- Interleaved practice — mixing different kinds of problems or material in one study session
… the highly effective:
- Practice testing
- Distributed practice
It’s worth noting that some of the techniques in the moderate efficacy category were placed there due to a lack of evidence rather than evidence to the contrary - so it’s worth keeping an open mind…
An “obvious” outcome was that results are better if an examination is taken immediately after the course rather than during it (what this really shows, I suppose, is that performance is better on shorter courses, due to mental fatigue).
Practice performance is a good indicator of test performance for interleaved practice but not for blocked practice (i.e. mix it up!). This is quite scary.
Dunlosky et al. (2013) Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychological Science in the Public Interest, 14(1) 4–58
Data overload? Check out this press release instead:
» Which Study Strategies Make The Grade (ScienceDaily, January 2013)