
Category: Protein Folding

2 tricked-out pies to be thankful for: pear with cranberries and pumpkin with ginger praline – The Gazette

By JeanMarie Brownson, Chicago Tribune

Homemade pie fillings prove easy. Crust not so much. Practice makes perfect. With every pie, our skills improve. It's an acquired art to turn out flaky, beautiful crust. My mother regularly reminds us of her early crust adventures, many of which ended in the garbage can. No worries, she says, the crust ingredients cost far less than the filling.

So, when time allows, we practice making pie crust, hearing her voice remind us to use a gentle hand when gathering the moist dough into a ball and later when rolling it out. Mom always uses a floured rolling cloth on the board and on the rolling pin. These days, I prefer to roll between two sheets of floured wax paper. We factor in plenty of time to refrigerate the dough so it's at the perfect stage for easy rolling. The chilly rest also helps prevent shrinkage in the oven.

I've been using the same pie dough recipe for years now. I like the flakiness I get from vegetable shortening and the flavor of butter, so I use some of each fat. A bit of salt in the crust helps balance sweet fillings. The dough can be made a few days in advance. Soften it at room temperature until pliable enough to roll, but not so soft that it sticks to your work surface.

Of course, when pressed for time, I substitute store-bought frozen crusts. Any freshly baked pie, with or without a homemade crust, is better than most store-bought versions.

I read labels to avoid ingredients I don't want to eat or serve my family. I'm a fan of Trader Joe's ready-to-roll pie crusts sold in freezer cases, both for their clean ingredient line and the baked flavor. The 22-ounce box contains two generous crusts (or one bottom crust and one top or lattice). Other brands, such as Simple Truth Organics, taste fine, but at 15 ounces for two crusts, are best suited for smaller pies. Wewalka brand sells one 9-ounce crust that's relatively easy to work with. Always thaw according to package directions and use a rolling pin or your hands to repair any rips that may occur when unwrapping.

Double-crust fruit pies challenge us to get the thickener amount just right so the pie is not soupy when cut. I'm a huge fan of instant tapioca in most fruit pies because it thickens the juices without adding flavor or a cloudy appearance. In general, I use one tablespoon instant tapioca for every two cups cut-up raw fruit.

Pretty, lattice-topped pies have the added benefit of allowing more fruit juice evaporation while the pie bakes. Precooking the fruit for any pie helps ensure that the thickener is cooked through; I especially employ this technique when working with cornstarch or flour-thickened pie fillings. This also allows the cook to work in advance, a bonus around the busy holiday season.


We are loving the combination of juicy, sweet Bartlett pears with tart cranberries for a gorgeous pie with hues of pink; a few crisp apples and chewy dried cranberries contribute contrasting textures. Feel free to skip the lattice work and simply add a top crust; pierce the top crust in several places with a fork to allow steam to escape. For added flavor and texture, I brush the top crust with cream and sprinkle it generously with coarse sugar before baking.

The nut-free ginger praline recipe is a riff on a longtime favorite pumpkin pie from Jane Salzfass Freiman, a former Chicago Tribune recipe columnist. She taught us to gussy up the edge of pumpkin pie with nuts, brown sugar and butter. We are employing store-bought ginger snap cookies and crystallized ginger in place of pecans for a spicy, candied edge to contrast the creamy pie interior. Think of this pie as all your favorite coffee shop flavors in one: pumpkin pie spice and gingerbread, topped with whipped cream.

Happy pie days, indeed.

PEAR, DOUBLE CRANBERRY AND APPLE LATTICE PIE

Prep: 1 hour

Chill: 1 hour

Cook: 1 hour

Makes: 8 to 10 servings

1 recipe double crust pie dough, see recipe

2 1/2 pounds ripe, but still a bit firm, Bartlett pears, about 6

1 1/2 pounds Honeycrisp or Golden Delicious apples, about 4

2 cups fresh cranberries, about 8 ounces

3 tablespoons unsalted butter

3/4 cup sugar

3 tablespoons cornstarch

1 cup (4 ounces) dried cranberries

1/2 teaspoon grated fresh orange zest

1/8 teaspoon salt

Cream or milk, coarse sugar (or turbinado sugar)

Make pie dough and refrigerate it as directed. Working between two sheets of floured wax paper, roll out one disk into a 12-inch circle. Remove the top sheet of wax paper and use the bottom sheet to flip the crust into a 10-inch pie pan. Gently smooth the crust into the pan, without stretching it. Roll the edge of the dough under so it sits neatly on the edge of the pie dish. Refrigerate.

Roll the second disk of pie dough between the sheets of floured wax paper into an 11-inch circle. Slide onto a cookie sheet and refrigerate while you make the filling.

Peel and core the pears. Slice into 1/4-inch wide wedges; put into a bowl. You should have 6 generous cups. Peel and core the apples. Cut into 3/4-inch chunks; you should have about 3 1/2 cups. Add to the pears. Stir in fresh cranberries.

Heat butter in large deep skillet over medium-high until melted; add pears, apples and fresh cranberries. Cook, stirring, until nicely coated with butter, about 2 minutes. Cover and cook to soften the fruit, 3 minutes. Add sugar and cornstarch; cook and stir until glazed and tender, about 5 minutes. Remove from heat; stir in dried cranberries, orange zest and salt. Spread on a rimmed baking sheet; cool to room temperature. While the fruit mixture cools, heat oven to 425 degrees.

Pile the cooled fruit into the prepared bottom crust. Use a very sharp knife to cut the rolled top crust into 18 strips, each about 1/2 inch wide. Place 9 of those strips over the fruit filling, positioning them about 1/2 inch apart. Arrange the other 9 strips over the strips on the pie in a diagonal pattern. (If you want to make a woven lattice, add one strip of dough at a time over the 9 strips on the pie, lifting and folding the strips underneath to weave them together.)

Crimp the edge of the bottom crust and the lattice strips together with your fingers. Use a fork to make a decorative edge all the way around the pie. Use a pastry brush to brush each of the strips and the edge of the pie with cream. Sprinkle strips and the edge with the coarse sugar.


Place pie on a baking sheet. Bake at 425 degrees, 25 minutes. Reduce oven temperature to 350 degrees. Use strips of foil to lightly cover the outer edge of the pie. Continue baking until the filling is bubbling hot and the crust richly golden, about 40 minutes more.

Cool completely on a wire rack. Serve at room temperature topped with whipped cream or ice cream. To rewarm the pie, simply set it in a 350-degree oven for about 15 minutes.

Nutrition information per serving (for 10 servings): 540 calories, 24 g fat, 11 g saturated fat, 34 mg cholesterol, 80 g carbohydrates, 43 g sugar, 4 g protein, 270 mg sodium, 7 g fiber

DOUBLE CRUST PIE DOUGH

Prep: 20 minutes

Chill: 1 hour

Makes: Enough for a double crust 10-inch pie

This is our family's favorite pie crust for ease of use with a flaky outcome. We use vegetable shortening for easy dough handling and maximum flakiness; unsalted butter adds rich flavor.

2 1/2 cups flour

1 tablespoon sugar

1 teaspoon salt

1/2 cup unsalted butter, very cold

1/2 cup trans-fat free vegetable shortening, frozen

Put flour, sugar and salt into a food processor. Pulse to mix well. Cut butter and shortening into small pieces; sprinkle them over the flour mixture. Pulse to blend the fats into the flour. The mixture will look like coarse crumbs.

Put ice cubes into about 1/2 cup water and let the water chill. Remove the ice cubes and drizzle about 6 tablespoons of the ice water over the flour mixture. Briefly pulse the machine just until the mixture gathers into a dough.

Dump the mixture out onto a sheet of wax paper. Gather into two balls, one slightly larger than the other. (Use this one later for the bottom crust.) Flatten the balls into thick disks. Wrap in plastic and refrigerate until firm, about 1 hour. (Dough will keep in the refrigerator for several days.)

Nutrition information per serving (for 10 servings): 291 calories, 20 g fat, 8 g saturated fat, 24 mg cholesterol, 25 g carbohydrates, 1 g sugar, 3 g protein, 235 mg sodium, 1 g fiber

GINGER PRALINE PUMPKIN PIE

Prep: 40 minutes

Cook: 1 1/2 hours

Makes: 8 servings


Prebaking the crust helps ensure the proper texture in the finished pie. You can replace the ginger snap cookies here with just about any spice cookie; I also like to use speculoos cookies or homemade molasses cookies. The recipe calls for canned pumpkin pie mix, which has sugar and spice already.

Half recipe double crust pie dough, see recipe

Filling

2 large eggs

1 can (30 ounces; or two 15-ounce cans) pumpkin pie mix (with sugar and spices)

1/2 teaspoon each ground: cinnamon, ginger

1/4 teaspoon ground cloves

2/3 cup heavy whipping cream

2 tablespoons dark rum or 1 teaspoon vanilla

Topping

3 tablespoons butter, softened

2 tablespoons dark brown sugar

1/4 cup finely chopped crystallized ginger, about 1 1/2 ounces

1 cup roughly chopped or broken ginger snap cookies, about 2 ounces or 12 cookies

Whipped cream for garnish

For crust, heat oven to 425 degrees. Roll pie dough between 2 sheets of floured wax paper to an 11-inch circle. Remove the top sheet of paper. Use the bottom sheet to help you flip the dough into a 9-inch pie pan. Gently ease the dough into the pan, without stretching it; roll the edge of the dough under so it sits neatly on the edge of the pie dish; flatten attractively with a fork.

Line the bottom of the pie crust with a sheet of foil; fill the foil with pie weights or dried beans. Bake, 8 minutes. Remove the beans using the foil to lift them out of the crust. Return pie crust to the oven; bake until light golden in color, about 2 minutes. Cool. (Crust can be prebaked up to 1 day in advance; store in a cool, dry place.)

Reduce oven temperature to 350 degrees. For filling, whisk eggs in a large bowl until smooth. Whisk in pumpkin mix, cinnamon, ginger and cloves until smooth. Whisk in cream and rum or vanilla.

For topping, mix soft butter and brown sugar in a small bowl until smooth. Stir in crystallized ginger; gently stir in the cookies to coat them with the butter mixture.

Carefully pour pie filling into cooled crust. Set the pie pan on a baking sheet; slide into the center of the oven. Bake, 40 minutes. Remove pie from oven. Gently distribute the topping evenly around the outer rim of the pie, near the crust. Return the pie to the oven; bake until a knife inserted near the center is withdrawn clean, about 40 more minutes. Cool on a wire rack. Serve cold or at room temperature with whipped cream.

Nutrition information per serving: 481 calories, 27 g fat, 13 g saturated fat, 96 mg cholesterol, 58 g carbohydrates, 9 g sugar, 6 g protein, 433 mg sodium, 9 g fiber


Argonne Researchers to Share Scientific Computing Insights at SC19 – HPCwire

Nov. 15, 2019 - The Supercomputing 2019 (SC19) conference, scheduled for November 17-22 in Denver, will bring together the global high-performance computing (HPC) community, including researchers from the U.S. Department of Energy's (DOE) Argonne National Laboratory, to share scientific computing advances and insights with an eye toward the upcoming exascale era.

Continuing the laboratory's long history of participation in the SC conference series, more than 90 Argonne researchers will contribute to conference activities and studies on topics ranging from exascale computing and big data analysis to artificial intelligence (AI) and quantum computing.

"SC is a tremendous venue for Argonne to showcase its innovative uses of high-performance and data-intensive computing to advance science and engineering," said Salman Habib, director of Argonne's Computational Science division. "We look forward to sharing our research and connecting with and learning from our peers, who are also working to push the boundaries of extreme-scale computing in new directions."

As the future home to one of the world's first exascale supercomputers, Aurora, an Intel-Cray machine scheduled to arrive in 2021, Argonne continues to drive the development of technologies, tools and techniques that enable scientific breakthroughs on current and future HPC systems. To fully realize exascale's potential, the laboratory is creating an environment that supports the convergence of AI, machine learning and data science methods alongside traditional modeling and simulation-based research.

"We are seeing rapid advances in the application of deep learning and other forms of AI to complex science problems at Argonne and across the broader research community," said Ian Foster, director of Argonne's Data Science and Learning division, Argonne Distinguished Fellow and also the Arthur Holly Compton Distinguished Service Professor of Computer Science at the University of Chicago. "SC provides a forum for the community to get together and share how these methods are being used to accelerate research for a diverse set of applications."

The laboratory's conference activities will include technical paper presentations, talks, workshops, birds-of-a-feather sessions, panel discussions and tutorials. In addition, Argonne will partner with other DOE national laboratories to deliver talks and demos at the DOE's conference booth (#925). Some notable Argonne activities are highlighted below. For the full schedule of the laboratory's participation in the conference, visit Argonne's SC19 webpage.

DOE Booth Talk: Scientific Domain-Informed Machine Learning

Argonne computer scientist Prasanna Balaprakash will deliver a talk at the DOE booth on the laboratory's pivotal research with machine learning. His featured talk will cover Argonne's efforts to develop and apply machine learning approaches that enable data-driven discoveries in a wide variety of scientific domains, including cosmology, cancer research and climate modeling. Balaprakash will highlight successful use cases across the laboratory, as well as some exciting avenues for future research.

In Situ Analysis for Extreme-Scale Cosmological Simulations

Argonne physicist and computational scientist Katrin Heitmann will deliver the keynote talk at the In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization (ISAV 2019) workshop. Her talk will cover the development of in situ analysis capabilities (i.e., data analysis while a simulation is in progress) for the Hardware/Hybrid Accelerated Cosmology Code, which has been used to carry out several extreme-scale simulations on DOE supercomputers. Heitmann will discuss the current limitations of her team's on-the-fly analysis tool suite and how they are developing solutions to prepare for the arrival of DOE's forthcoming exascale systems.

Full-State Quantum Circuit Simulation by Using Data Compression

Researchers from Argonne and the University of Chicago will present a technical paper on their work to develop a new quantum circuit simulation technique that leverages data compression, trading computation time and fidelity to reduce the memory requirements of full-state quantum circuit simulations. Demonstrated on Argonne's Theta supercomputer, the team's novel approach provides researchers and developers with a platform for quantum software debugging and hardware validation for modern quantum devices that have more than 50 qubits.

Deep Learning on Supercomputers

Argonne scientists will have a strong presence at the Deep Learning on Supercomputers workshop. Co-chaired by Foster, the workshop provides a forum for researchers working at the intersection of deep learning and HPC. Argonne researchers are part of a multi-institutional team that will present "DeepDriveMD: Deep-Learning-Driven Adaptive Molecular Simulations for Protein Folding." The study provides a quantitative basis by which to understand how coupling deep learning approaches to molecular dynamics simulations can lead to effective performance gains and reduced times-to-solution on supercomputing resources.

A team of researchers from Argonne and the University of Chicago will present "Scaling Distributed Training of Flood-Filling Networks on HPC Infrastructure for Brain Mapping" at the Deep Learning on Supercomputers workshop. The team's paper details an approach to improve the performance of flood-filling networks, an automated method for segmenting brain data from electron microscopy experiments. Using Argonne's Theta supercomputer, the researchers implemented a new synchronous and data-parallel distributed training scheme that reduced the amount of time required to train the flood-filling network.

Priority Research Directions for In Situ Data Management: Enabling Scientific Discovery from Diverse Data Sources

At the 14th Workshop on Workflows in Support of Large-Scale Science (WORKS19), Argonne computer scientist Tom Peterka's keynote talk will cover six priority research directions that highlight the components and capabilities needed for in situ data management to be successful for a wide variety of applications. In situ analysis tools can enable discoveries from a broad range of data sources (HPC simulations, experiments, scientific instruments and sensor networks) by helping researchers minimize data movement, save storage space and boost resource efficiency, often while simultaneously increasing scientific precision.

The Many Faces of Instrumentation: Debugging and Better Performance using LLVM in HPC

Argonne computational scientist Hal Finkel will deliver a keynote talk on the open-source LLVM compiler infrastructure at the Workshop on Programming and Performance Visualization Tools (ProTools 19). LLVM, winner of the 2012 ACM Software System Award, has become an integral part of the software-development ecosystem for optimizing compilers, dynamic-language execution engines, source-code analysis and transformation tools, debuggers and linkers, and a host of other programming language- and toolchain-related components. Finkel will discuss various LLVM technologies, HPC tooling use cases, challenges in using these technologies in HPC environments, and interesting opportunities for the future.

About Argonne National Laboratory

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

About the U.S. Department of Energy's Office of Science

The U.S. Department of Energy's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science

Source: Jim Collins, Argonne National Laboratory


How to Make the Most of Your Old Tech – New York Magazine

Photo: Billy H.C. Kwok/Bloomberg via Getty Images

If you're the kind of tech person who likes to stay on the cutting edge, the kind who upgrades their phone every year or rotates laptops with significant frequency, then it can be tough to know what you should do with your old stuff. I mean, yeah, you could throw it out or try to sell it on eBay, but you can also put it to work in other useful ways. Here are some ideas.

Remote control

A lot of people, myself included, use their phones to control their TV and stereo. You can cast stuff (video to your TV or music to a smart speaker) via functions like AirPlay or Chromecast. If you want to unplug and put your smartphone away when you're at home, having a separate device for a remote control is extremely helpful.

Gaming

Yeah, smartphones are good for mobile games, but there's some cool stuff on the horizon as well. Companies like Google and Microsoft are working on cloud gaming, letting you (theoretically) play console-quality games on your phone by streaming video from a remote server. Newer Android phones and iPhones with iOS 13 are compatible with Xbox and PlayStation controllers, so it's worth keeping an old smartphone around if you're interested in checking it out.

Webcam

If you're worried about security but not worried enough to buy a dedicated camera, you can use an old Android phone instead. Most guides recommend an app called IP Webcam to get it working. Once it's set up, you can check in on things while you're out.

Spare GPS device

Even if you don't have an internet connection, your old smartphone's GPS system should still work. Popular apps like Google Maps let you cache navigational data and save it offline, so there's nothing stopping you from keeping an old phone in your car just in case.

PC media server

Instead of junking your old PC, set it up as a media server (here's a tutorial) so you can access movies, music, and family photos from any device on your network. It makes it easier to share stuff with your household without manually sending files around.
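If you just want something quick and dirty rather than a full-blown media-server app, even a few lines of Python will share a folder with other devices on your Wi-Fi. This is only a minimal stand-in for the tutorial above, not the setup it describes, and the folder path and port below are placeholders you'd swap for your own.

```python
# A bare-bones local file server using Python's built-in http.server module.
# Point MEDIA_DIR at your media folder; the path and port here are placeholders.
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

MEDIA_DIR = "/path/to/your/media"   # placeholder: change to your folder
PORT = 8000

# SimpleHTTPRequestHandler serves files from the given directory over HTTP.
handler = partial(SimpleHTTPRequestHandler, directory=MEDIA_DIR)
server = ThreadingHTTPServer(("0.0.0.0", PORT), handler)

print(f"Sharing {MEDIA_DIR} on port {PORT}; open this PC's IP address in a browser on another device")
server.serve_forever()
```

It won't transcode video or build a fancy library the way dedicated media-server software does, but for browsing and streaming files over your home network it gets the job done.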

Participate in a science project

Folding@home is a distributed computing project that simulates protein folding, computational drug design, and other types of molecular dynamics. It's a program that runs in the background on computers and aids medical research. Is your old PC going to cure cancer? Probably not. But it'll help in a small way.

Strip it for parts

This is a long shot, but there's a healthy aftermarket for old PC parts, in part because you can't really buy individual components directly from manufacturers. You can also do it just to see if you can. iFixIt sells plenty of ready-made kits for any assembly/disassembly project you might pursue.

Give it to your parents

This one's pretty obvious, but you can save yourself some time and headaches by taking an old computer or phone, setting it up yourself, and then giving it to your parents. They're not going to replace that old Gateway on their own!

Paperweight

When the electrical grid eventually fails and we return to using paper for everything, you're gonna need something to hold all of that paper down.



That Junk DNA Is Full of Information! – Advanced Science News


It should not surprise us that even in parts of the genome where we don't obviously see a functional code (i.e., one that's been evolutionarily fixed as a result of some selective advantage), there is a type of code, but not like anything we've previously considered as such. And what if it were doing something in three dimensions as well as the two dimensions of the ATGC code? A paper just published in BioEssays explores this tantalizing possibility...

Isn't it wonderful to have a really perplexing problem to gnaw on, one that generates almost endless potential explanations. How about: what is all that non-coding DNA doing in genomes? That 98.5% of human genetic material that doesn't produce proteins. To be fair, the deciphering of non-coding DNA is making great strides via the identification of sequences that are transcribed into RNAs that modulate gene expression, may be passed on transgenerationally (epigenetics) or set the gene expression program of a stem cell or specific tissue cell. Massive amounts of repeat sequences (remnants of ancient retroviruses) have been found in many genomes, and again, these don't code for protein, but at least there are credible models for what they're doing in evolutionary terms (ranging from genomic parasitism to symbiosis and even exploitation by the very host genome for producing the genetic diversity on which evolution works); incidentally, some non-coding DNA makes RNAs that silence these retroviral sequences, and retroviral ingression into genomes is believed to have been the selective pressure for the evolution of RNA interference (so-called RNAi); repetitive elements of various named types and tandem repeats abound; introns (many of which contain the aforementioned types of non-coding sequences) have transpired to be crucial in gene expression and regulation, most strikingly via alternative splicing of the coding segments that they separate.

Still, there's plenty of problem to gnaw on because although we are increasingly understanding the nature and origin of much of the non-coding genome and are making major inroads into its function (defined here as evolutionarily selected, advantageous effect on the host organism), we're far from explaining it all, and, more to the point, we're looking at it with a very low-magnification lens, so to speak. One of the intriguing things about DNA sequences is that a single sequence can encode more than one piece of information depending on what is reading it and in which direction: viral genomes are classic examples in which genes read in one direction to produce a given protein overlap with one or more genes read in the opposite direction (i.e., from the complementary strand of DNA) to produce different proteins. It's a bit like making simple messages with reverse-pair words (a so-called emordnilap). For example: REEDSTOPSFLOW, which, by an imaginary reading device, could be divided into REED STOPS FLOW. Read backwards, it would give WOLF SPOTS DEER.
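For the computationally inclined, the dual reading is easy to check with a few lines of Python. Only the REEDSTOPSFLOW example itself comes from the text above; the greedy word-splitting helper and the two tiny vocabularies are invented purely for the illustration.

```python
# Toy demonstration that one letter string carries two messages, depending on
# the reading direction. The vocabularies are hypothetical helpers used only
# to segment the letter string back into words.

def split_into_words(letters, vocabulary):
    """Greedily split a letter string into known words (toy segmentation)."""
    words, i = [], 0
    while i < len(letters):
        for j in range(len(letters), i, -1):
            if letters[i:j] in vocabulary:
                words.append(letters[i:j])
                i = j
                break
        else:
            return None  # no segmentation found with this vocabulary
    return words

message = "REEDSTOPSFLOW"
forward_vocab = {"REED", "STOPS", "FLOW"}
backward_vocab = {"WOLF", "SPOTS", "DEER"}

print(split_into_words(message, forward_vocab))         # ['REED', 'STOPS', 'FLOW']
print(split_into_words(message[::-1], backward_vocab))  # ['WOLF', 'SPOTS', 'DEER']
```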

Now, if it is of evolutionary advantage for two messages to be coded so economically (as is the case in viral genomes, which tend to evolve towards minimum complexity in terms of information content, hence reducing necessary resources for reproduction), then the messages themselves evolve with a high degree of constraint. What does this mean? Well, we could word our original example message as RUSH-STEM IMPEDES CURRENT, which would embody the same essential information as REED STOPS FLOW. However, that message, if read in reverse (or even in the same sense, but in different chunks) does not encode anything additional that is particularly meaningful. Probably the only way of conveying both pieces of information in the original messages simultaneously is the very wording REEDSTOPSFLOW: that's a highly constrained system! Indeed, if we studied enough examples of reverse-pair phrases in English, we would see that they are, on the whole, made up of rather short words, and the sequences are missing certain units of language such as articles (the, a); if we looked more closely, we might even detect a greater representation than average of certain letters of the alphabet in such messages. We would see these as biases in word and letter usage that would, a priori, allow us to have a stab at identifying such dual-function pieces of information.

Now let's return to the letters, words, and information encoded in genomes. For two distinct pieces of information to be encoded in the same piece of genetic sequence we would, similarly, expect the constraints to be manifest in biases of word and letter usage, the analogies, respectively, for amino acid sequences constituting proteins, and their three-letter code. Hence a sequence of DNA can code for a protein and, in addition, for something else. This something else, according to Giorgio Bernardi, is information that directs the packaging of the enormous length of DNA in a cell into the relatively tiny nucleus. Primarily it is the code that guides the binding of the DNA-packaging proteins known as histones. Bernardi refers to this as the genomic code, a structural code that defines the shape and compaction of DNA into the highly-condensed form known as chromatin.

But didn't we start with an explanation for non-coding DNA, not protein-coding sequences? Yes, and in the long stretches of non-coding DNA we see information in excess of mere repeats, tandem repeats and remnants of ancient retroviruses: there is a type of code at the level of preference for the GC pair of chemical DNA bases compared with AT. As Bernardi reviews, synthesizing his and others' groundbreaking work, in the core sequences of the eukaryotic genome, the GC content in structural organizational units of the genome termed isochores increased during the evolutionary transition between so-called cold-blooded and warm-blooded organisms. And, fascinatingly, this sequence bias overlaps with sequences that are much more constrained in function: these are the very protein-coding sequences mentioned earlier, and they, more than the intervening non-coding sequences, are the clue to the genomic code.

Protein-coding sequences are also packed and condensed in the nucleus, particularly when they're not in use (i.e., being transcribed, and then translated into protein), but they also contain relatively constant information on precise amino acid identities, otherwise they would fail to encode proteins correctly: evolution would act on such mutations in a highly negative manner, making them extremely unlikely to persist and be visible to us. But the amino acid code in DNA has a little catch that evolved in the most simple of unicellular organisms (bacteria and archaea) billions of years ago: the code is partly redundant. For example, the amino acid Threonine can be coded in eukaryotic DNA in no fewer than four ways: ACT, ACC, ACA or ACG. The third letter is variable and hence available for the coding of extra information. This is exactly what happens to produce the genomic code, in this case creating a bias for the ACC and ACG forms in warm-blooded organisms. Hence, the high constraint on this additional code, which is also seen in parts of the genome that are not under such constraint as protein-coding sequences, is imposed by the packaging of protein-coding sequences that embody two sets of information simultaneously. This is analogous to our example of the highly-constrained dual-information sequence REEDSTOPSFLOW.
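As a rough computational sketch of that redundancy, the snippet below takes the four Threonine codons (a standard genetic-code fact) and measures the GC content at third codon positions, the so-called GC3 bias. The toy sequences and the bare-bones calculation are illustrative assumptions, not data or code from Bernardi's analysis.

```python
# Minimal illustration of third-position ("wobble") redundancy and GC3 bias.
THR_CODONS = {"ACT", "ACC", "ACA", "ACG"}  # all four encode Threonine

def gc3_fraction(coding_seq):
    """Fraction of third codon positions that are G or C (the GC3 bias)."""
    thirds = [coding_seq[i + 2] for i in range(0, len(coding_seq) - 2, 3)]
    return sum(base in "GC" for base in thirds) / len(thirds)

# Two toy genes encoding the same Thr-Thr-Thr peptide, differing only at the
# variable third positions, and hence free to carry different "extra" information.
gc_rich_version = "ACCACGACC"
at_rich_version = "ACTACAACT"
print(gc3_fraction(gc_rich_version))  # 1.0 -- all third positions are G/C
print(gc3_fraction(at_rich_version))  # 0.0 -- same protein, different bias
```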

Importantly, however, the constraint is not as strict as in our English language example because of the redundancy of the third position of the triplet code for amino acids: a better analogy would be SHE*ATE*STU* where the asterisk stands for a variable letter that doesn't make any difference to the machine that reads the three-letter component of the four-letter message. One could then imagine a second level of information formed by adding D at these asterisk points, to make SHEDATEDSTUD (SHE DATED STUD). Next imagine a second reading machine that looks for meaningful phrases of a sensitive nature containing a greater than average concentration of Ds. This reading machine carries a folding machine with it that places a kind of peg at each D, a point where the message should be bent by 120 degrees in the same plane; with the message kinked at each peg, we would end up with a more compact, triangular version. In eukaryotic genomes, the GC sequence bias proposed to be responsible for structural condensation extends into non-coding sequences, some of which have identified activities, though less constrained in sequence than protein-coding DNA. There it directs their condensation via histone-containing nucleosomes to form chromatin.

Figure. Analogy between condensation of a word-based message and condensation of genomic DNA in the cell nucleus. Panel A: Information within information, a sequence of words with a variable fourth space which, when filled with particular letters, generates a further message. One message is read by a three-letter reading machine; the other by a reading machine that can interpret information extending to the 4th (variable) position of the sequence. The second reader recognizes sensitive information that should be concealed, and at the points where a D appears in the 4th position, it folds the string of words, hence compressing the sensitive part and taking it out of view. This is an analogy for the principle of genomic 3D compression via chromatin, as depicted in panel B: a fluorescence image (via Fluorescence In-Situ Hybridization, FISH) of the cell nucleus. H2/H3 isochores, which increased in GC content during evolution from cold-blooded to warm-blooded vertebrates, are compressed into a chromatin core, leaving L1 isochores (with lower GC content) at the periphery in a less-condensed state. The genomic code embodied in the high-GC tracts of the genome is, according to Bernardi [1], read by the nucleosome-positioning machinery of the cell and interpreted as sequence to be highly compressed in euchromatin. Acknowledgements: Panel A: concept and figure production: Andrew Moore; Panel B: a FISH pattern of H2/H3 and L1 isochores from a lymphocyte induced by PHA, courtesy of S. Saccone, as reproduced in Ref. [1].

These regions of DNA may then be regarded as structurally important elements in forming the correct shape and separation of condensed coding sequences in the genome, regardless of any other possible function that those non-coding sequences have: in essence, this would be an explanation for the persistence in genomes of sequences to which no function (in terms of evolutionarily-selected activity) can be ascribed (or, at least, no substantial function).

A final analogy, this time much more closely related, might be the very amino acid sequences in large proteins, which do a variety of twists, turns, folds etc. We may marvel at such complicated structures and ask: but do they need to be quite so complicated for their function? Well, maybe they do in order to condense and position parts of the protein in the exact orientation and place that generates the three-dimensional structure that has been successfully selected by evolution. But with a knowledge that the genomic code overlaps protein coding sequences, we might even start to become suspicious that there is another selective pressure at work as well...

Andrew Moore, Ph.D., Editor-in-Chief, BioEssays

Reference:

1. G. Bernardi. 2019. The genomic code: a pervasive encoding/moulding of chromatin structures and a solution of the non-coding DNA mystery. BioEssays 41:12, 1900106.


IBM vs. Google and the race to quantum supremacy – Salon

Google's quantum supremacy claim has now been disputed by its close competitor IBM. Not because Google's Sycamore quantum computer's calculations are wrong, but because Google had underestimated what IBM's Summit, the most powerful supercomputer in the world, could do. Meanwhile, Google's paper, which had accidentally been leaked by a NASA researcher, has now been published in the prestigious science journal Nature. Google's claims are official now, and therefore can be examined in the way any new science claim should be examined: skeptically until all the doubts are addressed.

Previously, I have covered what quantum computing is, and in this article, I will move on to the key issue of quantum supremacy, the claim that IBM has challenged and what it really means. IBM concedes that Google has achieved an important milestone, but does not accept that it has achieved quantum supremacy.

IBM refuted Google's claim around the same time as Google's Nature paper was published. Google had claimed that IBM's supercomputer, Summit, would take 10,000 years to solve the problem Google's Sycamore had solved in a mere 200 seconds. IBM showed that Summit, with clever programming and using its huge disk space, could actually solve the problem in only 2.5 days. Sycamore still beat Summit on this specific problem by solving it 1,100 times faster, but not 157 million times faster, as Google had claimed. According to IBM, this does not establish quantum supremacy as that requires solving a problem a conventional computer cannot solve in a reasonable amount of time. Two and a half days is reasonable; therefore, according to IBM, quantum supremacy is yet to be attained.

The original definition of quantum supremacy was given by John Preskill, on which he now has second thoughts. Recently he wrote, "supremacy, through its association with white supremacy, evokes a repugnant political stance. The other reason is that the word exacerbates the already overhyped reporting on the status of quantum technology."

Regarding IBM's claim that quantum supremacy has not yet been achieved, Scott Aaronson, a leading quantum computing scientist, wrote that though Google should have foreseen what IBM has done, it does not invalidate Google's claim. The key issue is not that Summit had a special way to solve the specific quantum problem Google had chosen, but that Summit cannot scale: if Google's Sycamore goes from 53 to 60 qubits, IBM will require 33 Summits; if to 70 qubits, a supercomputer the size of a city!

Why does Summit have to increase at this rate to match Sycamore's extra qubits? To demonstrate quantum supremacy, Google chose the simulation of quantum circuits, which is similar to generating a sequence of truly random numbers. Classical computers can produce numbers that appear to be random, but it is a matter of time before they will repeat the sequence.
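That "eventually repeats" point is easy to see with a deliberately tiny pseudorandom generator. The linear congruential generator below uses toy constants chosen so the cycle is short enough to print; real generators use much larger parameters and so have far longer, but still finite, periods.

```python
# A tiny linear congruential generator (LCG) with toy constants, so its
# repetition is visible immediately; real PRNGs just have much longer cycles.
def lcg(seed, a=5, c=3, m=16):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=7)
first_pass = [next(gen) for _ in range(16)]
second_pass = [next(gen) for _ in range(16)]
print(first_pass)
print(second_pass)                 # identical to the first 16 values
assert first_pass == second_pass   # the sequence has cycled
```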

The resources (disk space, memory, computing power) classical computers require to solve this problem, in a reasonable time, increase exponentially with the size of the problem. For quantum computers, adding qubits linearly (meaning, simply adding more qubits) increases computing capacity exponentially. Therefore, just 7 extra qubits of Sycamore means IBM needs to increase the size of Summit 33 times. A 17-qubit increase of Sycamore needs Summit to increase by thousands of times. This is the key difference between Summit and Sycamore. For each extra qubit, a conventional computer will have to scale its resources exponentially, and this is a losing game for the conventional computer.
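A back-of-the-envelope calculation shows the shape of this losing game. The sketch below assumes 8 bytes per amplitude and roughly 250 petabytes of storage for Summit; both figures are outside assumptions rather than numbers from this article, but they land in the same ballpark: 53 qubits is just about storable, while 60 or 70 qubits is hopeless.

```python
# Rough scaling of full state-vector simulation: an n-qubit state has 2**n
# complex amplitudes. The byte size and Summit storage figure are assumptions.
BYTES_PER_AMPLITUDE = 8          # e.g. a single-precision complex number
SUMMIT_STORAGE_BYTES = 250e15    # ~250 PB of file-system storage (assumed)

def state_vector_bytes(n_qubits):
    """Memory needed to hold all 2**n amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (53, 60, 70):
    needed = state_vector_bytes(n)
    print(f"{n} qubits: {needed / 1e15:,.1f} PB "
          f"(~{needed / SUMMIT_STORAGE_BYTES:,.1f}x Summit's storage)")
```

With these assumptions, 53 qubits needs on the order of 70 petabytes, 60 qubits needs several dozen Summits' worth of storage, and 70 qubits needs tens of thousands, which is the scaling argument sketched above.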

We have to give Google the victory here, not because IBM is wrong, but because the principle of quantum supremacy, that a quantum computer can work as designed, solve a specific problem, and beat a conventional computer in computational time, has been established. The actual demonstration (a more precise definition of reasonable time and its physical demonstration) is only of academic value. If 53 qubits can solve the problem, but with IBM's Summit still in the race, even if much slower, it is just a matter of time before it is well and truly beaten.

Of course, there are other ways that this particular test could fail. A new algorithm can be discovered that solves this problem faster, starting a fresh race. But the principle here is not a specific race but the way quantum computing will scale in solving a certain class of problems that classical or conventional computers cannot.

For problems that do not increase exponentially with size, the classical computers work better, are way cheaper, and do not require near absolute zero temperatures that quantum computers require. In other words, classical computers will coexist with quantum computers and not follow typewriters and calculators to the technology graveyards.

The key issue in creating viable quantum computers should not be confused with a race between classical computers and the new kid on the block. If we see the race as between two classes of computers only in terms of solving a specific problem, we are missing the big picture. It is simply that for classical computers, the solution time for a certain class of problems increases exponentially with the size of the problem, and beyond a certain size, we just can't solve them in any reasonable time. Quantum computers have the potential to solve such large problems requiring exponential computing power. This opens a way to solve these classes of problems other than the iffy route of finding new algorithms.

Are there such problems, and will they yield worthwhile technological applications? The Google problem, computing the future states of quantum circuits, was not chosen for any practical application. It was simply chosen to showcase quantum supremacy, defined as a quantum computer solving a problem that a classical computer cannot solve in a reasonable time.

Recently, a Chinese team led by Pan Jianwei has published a paper that shows another problem (a Boson sampling experiment with 20 photons) can also be a pathway to show quantum supremacy. Both these problems are constructed not to showcase real-world applications, but simply to show that quantum computing works and can potentially solve real-world problems.

What are the classes of problems that quantum computers can solve? The first are those for which the late Nobel laureate Richard Feynman had postulated quantum computers as a simulation of the quantum world. Why do we need such simulations? After all, we live in the macro-world in which quantum effects are not visible. Though such effects may not be visible to us, they are indeed all around us and affect us in different ways.

A number of such phenomena arise out of the interaction of the quantum world with the macro-world. It is now clear that using classical computers we cannot simulate, for instance, protein folding, as it involves the quantum world intersecting with the macro-world. A quantum computer could simulate the probability of how many possible ways such proteins could fold and the likely shapes they could take. This would allow us to build not only new materials but also medicines known as biologics. Biologics are large molecules used for treating cancer and auto-immune diseases. They work due to not only their composition but also their shapes. If we could work out their shapes, we could identify new proteins or new biological drug targets; or complex new chemicals for developing new materials. The other examples are solving real-life combinatorial problems such as searching large databases, cracking cryptographic problems, improved medical imaging, etc.

The business world (IBM, Google, Microsoft) is gung-ho on the possible use of quantum computers for such applications, and that is why they are all investing in it big time. Nature reported that in 2017 and 2018, at least $450 million was invested by venture capital in quantum computing, more than four times as much as in the preceding two years. Nation-states, notably the United States and China, are also investing billions of dollars each year.

But what if quantum computers do not lead to commercial benefits? Should we then abandon them? What if they are useful only for simulating quantum mechanics and understanding that world better? Did we build the Hadron Collider (investing $13.25 billion, and with an annual running cost of $1 billion) only because we expected discoveries that will have commercial value? Or, should society invest in knowing the fundamental properties of space and time, including that of the quantum world? Even if quantum computers only give us a window to the quantum world, the benefits would be knowledge.

What is the price of this knowledge?



Microprotein ID’d Affecting Protein Folding and Cell Stress Linked to Diseases Like Huntington’s, Study Finds – Huntington’s Disease News

PIGBOS, a newly discovered mitochondrial microprotein involved in a cellular stress-response mechanism called the unfolded protein response (UPR), might be a treatment target for neurodegenerative diseases like Huntington's, a study suggests.

The study, "Regulation of the ER stress response by a mitochondrial microprotein," was published in the journal Nature Communications.

Maintenance of protein balance, including the production, shaping (folding), and degradation of proteins, is essential for a cell's function and survival.

Dysfunction in protein balance has been associated with the build-up of toxic protein aggregates and the development of neurodegenerative diseases, including Alzheimer's, Parkinson's, and Huntington's disease.

The endoplasmic reticulum (ER) is a key cellular structure in the production, folding, modification, and transport of proteins. Excessive amounts of unfolded or misfolded proteins (proteins with abnormal 3D structures) in the ER result in ER stress and activation of the unfolded protein response (UPR), a stress-response mechanism that acts to mitigate damage caused by this protein build-up.

UPR promotes the reduction of protein production and an increase in protein folding and degradation of unfolded proteins in the ER. If this fails to restore cellular balance and prolongs the activation of UPR, cell death is induced.

"UPR dysfunction contributes to accumulation of key disease-related proteins, and thus plays an essential role in the [development] of many neurodegenerative disorders, including Alzheimer's disease, Parkinson's disease, and Huntington's disease," the researchers wrote.

During UPR, mitochondria, the cells' powerhouses, are known to provide energy for protein folding in the ER and to activate cell death pathways if the cellular balance is not restored. However, how mitochondria and the ER communicate in this context remains unclear.

Researchers at the Salk Institute for Biological Studies, in California, discovered a mitochondrial microprotein, called PIGBOS, that regulates UPR at the sites of contact between mitochondria and the ER.

While the average human protein contains around 300 amino acids (the building blocks of proteins), microproteins have less than 100 amino acids. Microproteins were only recently found to be functional and important in the regulation of several cellular processes.

By conducting protein-binding experiments, the team found that the 54-amino acid microprotein PIGBOS, present in the outer membrane of mitochondria, interacts with a protein called CLCC1 at the ER-mitochondria contact sites.

CLCC1, whose low levels were previously associated with increased UPR and neurodegeneration, is found at the portion of the ER that contacts the mitochondria, called the mitochondria-associated ER membrane.

Further analyses showed that inducing ER stress in cells genetically modified to lack CLCC1 or PIGBOS increased the levels of UPR-related proteins, while the opposite effect was observed in cells overproducing PIGBOS. Lower levels of PIGBOS were also associated with greater cell death.

Researchers noted that these findings suggest that "loss of PIGBOS increases cellular sensitivity to ER stress, which in turn increases [cell death] and links PIGBOS levels to the ability of cells to survive stress," emphasizing that modulating PIGBOS levels can in turn modulate cellular sensitivity towards ER stress.

Results also showed that PIGBOS's UPR regulation is dependent on its interaction with CLCC1, and that modulating the number of ER-mitochondria contacts regulates the levels of PIGBOS-CLCC1 interactions.

These data identified PIGBOS as a [previously] unknown mitochondrial regulator of UPR, and the only known microprotein linked to the regulation of cell stress or inter-organelle signaling, the team emphasized.

These findings may help in developing treatment approaches targeting ER stress and cell death.

Given the importance of UPR in biology and disease, future studies on PIGBOS's role in UPR should afford additional insights and may provide methods for regulating this pathway for therapeutic applications, the researchers concluded.


Excerpt from:

Microprotein ID'd Affecting Protein Folding and Cell Stress Linked to Diseases Like Huntington's, Study Finds - Huntington's Disease News


Discover: Science is often wrong and that’s actually a really good thing – Sudbury.com

I'm a geneticist. I study the connection between information and biology, essentially what makes a fly a fly, and a human a human. Interestingly, we're not that different. It's a fantastic job and I know, more or less, how lucky I am to have it.

I've been a professional geneticist since the early 1990s. I'm reasonably good at this, and my research group has done some really good work over the years. But one of the challenges of the job is coming to grips with the idea that much of what we think we know is, in fact, wrong.

Sometimes, we're just off a little, and the whole point of a set of experiments is simply trying to do a little better, to get a little closer to the answer. At some point, though, in some aspect of what we do, it's likely that we're just flat-out wrong. And that's okay. The trick is being open-minded enough, hopefully, to see that someday, and then to make the change.

One of the amazing things about being a modern geneticist is that, generally speaking, people have some idea of what I do: work on DNA (deoxyribonucleic acid). When I ask a group of school kids what a gene is, the most common answer is DNA. And this is true, with some interesting exceptions. Genes are DNA and DNA is the information in biology.

For almost 100 years, biologists were certain that the information in biology was found in proteins and not DNA, and there were geneticists who went to the grave certain of this. How they got it wrong is an interesting story.

Genetics, microscopy (actually creating the first microscopes), and biochemistry were all developing together in the late 1800s. Not surprisingly, one of the earliest questions that fascinated biologists was how information was carried from generation to generation. Offspring look like their parents, but why? Why your second daughter looks like the postman is a question that came up later.

Early cell biologists were using the new microscopes to peer into the cell in ways that simply hadn't been possible previously. They were finding thread-like structures in the interior of cells that passed from generation to generation and were similar within a species but different between species. We now know these threads as chromosomes. Could these hold the information that scientists were looking for?

Advances in biochemistry paralleled those in microscopy and early geneticists determined that chromosomes were primarily made up of two types of molecules: proteins and DNA. Both are long polymers (chains) made up of repeated monomers (links in the chains). It seemed very reasonable that these chains could contain the information of biological complexity.

By analogy, think of a word as just a string of letters, a sentence as a chain of words, and a paragraph as a chain of sentences. We can think of chromosomes, then, as chapters, and all of our genetic information, what we now call our genome (all our genetic material), as the chapters that make up a novel. The question for those early geneticists, then, was: Which string made up the novel? Was it protein or DNA?

You and I know the answer: DNA. Early geneticists, however, got it wrong and then passionately defended this wrong stance for eight decades. Why? The answer is simple. Protein is complicated. DNA is simple. Life is complicated. The alphabet of life, then, should be complicated and protein fits that.

Proteins are made up of 20 amino acids; there are 20 different kinds of links in the protein chain. DNA is made up of only four nucleotides; there are only four different links in the DNA chain. Given the choice between a complicated alphabet and a simple one, the reasonable choice was the complicated one, namely protein. But biology doesn't always follow the obvious path, and the genetic material was, and is, DNA.
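
To make the alphabet comparison concrete, here is a small illustrative calculation of my own (not from the column): if DNA is read in three-letter words, or codons, a four-letter alphabet still yields 4^3 = 64 combinations, which is more than enough to specify 20 amino acids.

```python
# Illustrative only: a 4-letter DNA alphabet, read as 3-letter codons,
# produces 4**3 = 64 distinct "words" -- ample coverage for 20 amino acids.
from itertools import product

NUCLEOTIDES = "ACGT"
codons = ["".join(triplet) for triplet in product(NUCLEOTIDES, repeat=3)]

print(len(codons))        # 64
print(len(codons) >= 20)  # True: plenty of room to encode 20 amino acids
```

In other words, a "simple" chain can still carry a rich message; the apparent complexity of the alphabet was never the right test.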

It took decades of experiments to disprove conventional wisdom and convince most people that biological information was in DNA. For some, it took James Watson and Francis Crick (http://www.pbs.org/wgbh/aso/databank/entries/do53dn.html), using data misappropriated from Rosalind Franklin (https://www.nature.com/scitable/topicpage/rosalind-franklin-a-crucial-contribution-6538012/), deciphering the structure of DNA in 1953 to drive the nail in the protein coffin. It just seemed too obvious that protein, with all its complexity, would be the molecule that coded for complexity.

These were some of the most accomplished and thoughtful scientists of their day, but they got it wrong. And that's okay, if we learn from their mistakes.

It is too easy to dismiss this example as the foolishness of the past. We wouldn't make this kind of mistake today, would we? I can't answer that, but let me give you another example that suggests we would, and I'll argue at the end that we almost certainly are.

I'm an American, and one of the challenges of moving to Canada was having to adapt to overcooked burgers (my mother still can't accept that she can't get her burger medium when she visits). This culinary challenge is driven by a phenomenon that is one of the more interesting recent cases of scientists having it wrong and refusing to see it.

In the late 1980s, cows started wasting away and, in the late stages of what was slowly recognized as a disease, acting in such a bizarre manner that their disease, bovine spongiform encephalopathy, became known as Mad Cow Disease. Strikingly, the brains of the cows were full of holes (hence spongiform), and the holes were caked with plaques of proteins clumped together.

Really strikingly, the proteins were ones that are found in healthy brains, but now in an unnatural shape. Proteins are long chains, but they function because they have complex 3D shapes; think origami. Proteins fold and fold into specific shapes. But these proteins found in sick cow brains had a shape not normally seen in nature; they were misfolded.

Sometime after, people started dying from the same symptoms and a connection was made between eating infected cows and contracting the disease (cows could also contract the disease, but likely through saliva or direct contact, and not cannibalism). Researchers also determined the culprit was consumption only of neural tissue, brain and spinal tissue, the very tissue that showed the physical effects of infection (and this is important).

One of the challenges of explaining the disease was the time-course from infection to disease to death; it was long and slow. Diseases, we knew, were transmitted by viruses and bacteria, but no scientist could isolate one that would explain this disease. Further, no one knew of other viruses or bacteria whose infection would take this long to lead to death. For various reasons, people leaned toward assuming a viral cause, and careers and reputations were built on finding the slow virus.

In the late 1980s, a pair of British researchers suggested that perhaps the shape, the folding, of the proteins in the plaques was key. Could the misfolding be causing the clumping that led to the plaques? This proposal was soon championed by Stanley Prusiner, a young scientist early in his career.

The idea was simple. The misfolded protein was itself both the result and the cause of the infection. Misfolded proteins clumped, forming plaques that killed brain tissue; they also caused correctly folded versions of the protein to misfold. The concept was straightforward, but completely heretical. Disease, we knew, did not work that way. Diseases are transmitted by viruses or bacteria, and the information is transmitted as DNA (or, rarely, RNA, a closely related molecule). Disease is not transmitted in protein folding (although in 1963 Kurt Vonnegut had predicted such a model for world-destroying ice formation in his amazing book Cat's Cradle).
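
As a rough way to picture this self-templating idea, here is a toy simulation of my own (the parameters are arbitrary assumptions, not measurements): each misfolded molecule converts a small fraction of the remaining normal protein per step, so the misfolded pool starts slowly and then grows quickly, which fits the long, slow time course described above.

```python
# Toy sketch of prion-like conversion (arbitrary illustrative parameters).
# Each misfolded molecule can template the misfolding of normal copies,
# so the misfolded pool grows slowly at first, then roughly exponentially.
def simulate_conversion(normal=10_000, misfolded=1,
                        conversion_rate=1e-4, steps=30):
    history = []
    for _ in range(steps):
        # expected number of normal proteins converted in this step
        converted = min(normal, int(conversion_rate * normal * misfolded))
        normal -= converted
        misfolded += converted
        history.append(misfolded)
    return history

print(simulate_conversion()[:12])  # slow start, then rapid growth of the misfolded pool
```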

For holding this protein-based view of infection, Prusiner was literally and metaphorically shouted out of the room. Then he showed, experimentally and elegantly, that misfolded proteins, which he called prions, were the cause of these diseases, of both symptoms and infection.

For this accomplishment, he was awarded the 1997 Nobel Prize in Physiology or Medicine. He, and others, were right. Science, with a big S, was wrong. And that's okay. We now know that prions are responsible for a series of diseases in humans and other animals, including Chronic Wasting Disease, the spread of which poses a serious threat to deer and elk here in Ontario.

Circling back: the overcooked-burger phenomenon is because of these proteins. If you heat the prions sufficiently, they lose their unnatural shape (all shape, actually) and the beef is safe to eat. A well-done burger will guarantee no infectious prions, while a medium one will not. We don't have this issue in the U.S. because cows south of the border are less likely to have been infected with prions than their northern counterparts (or at least Americans are willing to pretend this is the case).

Where does this leave us? To me, the take-home message is that we need to remain skeptical, but curious. Examine the world around you with curious eyes, and be ready to challenge and question your assumptions.

Also, don't ignore the massive things in front of your eyes simply because they don't fit your understanding of, or wishes for, the world around you. Climate change, for example, is real and will likely make this a more difficult world for our children. I've spent a lot of time in my career putting together models of how the biological world works, but I know pieces of these models are wrong.

I can almost guarantee you that I have something as fundamentally wrong as those early geneticists stuck on protein as the genetic material of cells, or the prion-deniers; I just don't know what it is. Yet.

And this situation is okay. The important thing isn't to be right. Instead, it is to be open to seeing when you are wrong.

Dr. Thomas Merritt is the Canada Research Chair in Genomics and Bioinformatics at Laurentian University.

See original here:

Discover: Science is often wrong and that's actually a really good thing - Sudbury.com


Rett Syndrome Tied to Altered Protein Levels in Brain in Early Study – Rett Syndrome News

Lack of a functional MeCP2 protein leads to Rett syndrome by altering the levels of brain proteins associated with energy metabolism and protein regulation, a study in a mouse model suggests.

These altered protein levels might also predict Rett syndrome's progression, the investigators said.

The study, "Brain protein changes in Mecp2 mouse mutant models: Effects on disease progression of Mecp2 brain specific gene reactivation," was published in the Journal of Proteomics.

Rett syndrome is caused by mutations in the MECP2 gene that result in a missing functional MeCP2 protein, a regulator of gene expression. Despite prior studies in animal models, little research has focused on the effects of MeCP2 deficiency on the levels of other proteins in the brain, or on Rett syndrome's progression.

Researchers from Italy used a mouse model of Rett to address this gap. They did a proteomic analysis of the brains of mice both before and after they developed symptoms, and compared the data to controls without MECP2 mutations. (Proteomics is the large-scale study of proteins, conducted to draw more global conclusions than would be possible by assessing proteins one by one.)

Results showed abnormal levels of 20 brain proteins in symptomatic mice with Rett syndrome. Twelve of these proteins were overproduced, while eight were at lower levels compared to non-diseased control mice.

Notably, eight (40%) of these 20 proteins were involved in energy metabolism (the process by which cells get energy), and six (30%) were involved in proteostasis, which refers to cellular processes to ensure proper production and folding of proteins.

Presymptomatic mice showed abnormal levels of 18 proteins: 10 at low levels and eight at high levels compared to controls. Similar to symptomatic mice, these proteins were primarily involved in energy metabolism and proteostasis.

The team then looked at mice that had been engineered to turn the MECP2 gene on in the brain, which was associated with mild symptoms and a longer life than otherwise expected.

By comparing animals lacking functional MeCP2 to mice with so-called MECP2 gene reactivation, the researchers worked to identify the proteins most directly impacted by missing MeCP2.

They found 12 proteins whose levels were normalized by gene reactivation. Seven of these proteins were at low levels and five at high levels without functional MeCP2 protein. Again, most were associated with energy metabolism and proteostasis, while two proteins were involved in how cells respond to oxidants (reactive molecules that can damage DNA and cellular structures), a process called redox regulation.

Only two of these 12 proteins, PYL2 and SODC, had been previously associated with Rett syndrome via earlier animal model studies that recorded altered levels in the brain.

Our findings suggest that RTT [Rett syndrome] is characterized by a complex metabolic dysfunction strictly related to energy metabolism, proteostasis processes pathways and redox regulation mechanisms, the researchers wrote.

Alteration in the evidenced cellular processes, brain pathways and molecular mechanisms [suggest] the possibility of the use of proteins as predictive biomarkers, they added.


Read this article:

Rett Syndrome Tied to Altered Protein Levels in Brain in Early Study - Rett Syndrome News


Bulls-Eye: Imaging Technology Could Confirm When a Drug Is Going to the Right Place – On Cancer – Memorial Sloan Kettering

Summary

Doctors and scientists from Memorial Sloan Kettering report on an innovative technique for noninvasively watching where a targeted therapy is going in the body. It also allows them to see how much of the drug reaches the tumor.

Targeted therapy has become an important player in the collection of treatments for cancer. But sometimes it's difficult for doctors to determine whether a person's tumor has the right target or how much of a drug is actually reaching it.

A multidisciplinary team of doctors and scientists from Memorial Sloan Kettering has discovered an innovative technique for noninvasively visualizing where a targeted therapy is going in the body. This method can also measure how much of it reaches the tumor. What makes this development even more exciting is that the drug they are studying employs an entirely new approach for stopping cancer growth. The work was published on October 24 in Cancer Cell.

This paper reports on the culmination of almost 15 years of research, says first author Naga Vara Kishore Pillarsetty, a radiochemist in the Department of Radiology. Everything about this drug, from the concept to the clinical trials, was developed completely in-house at MSK.

Our research represents a new role for the field of radiology in drug development, adds senior author Mark Dunphy, a nuclear medicine doctor. It's also a new way to provide precision oncology.

The drug being studied, called PU-H71, was developed by the study's co-senior author Gabriela Chiosis. Dr. Chiosis is a member of the Chemical Biology Program in the Sloan Kettering Institute. PU-H71 is being evaluated in clinical trials for breast cancer and lymphoma, and the early results are promising.

We always hear about how DNA and RNA control a cell's fate, Dr. Pillarsetty says. But ultimately it is proteins that carry out the functions that lead to cancer. Our drug is targeting a unique network of proteins that allow cancer cells to thrive.

Most targeted therapies affect individual proteins. In contrast, PU-H71 targets something called the epichaperome. Discovered and named by Dr. Chiosis, the epichaperome is a communal network of proteins called chaperones.

Chaperone proteins help direct and coordinate activities in cells that are crucial to life, such as protein folding and assembly. The epichaperome, on the other hand, does not fold. It reorganizes the function of protein networks in cancer, which enables cancer cells to survive under stress.

Previous research from Dr. Chiosis and Monica Guzman of Weill Cornell Medicine provided details on how PU-H71 works. The drug targets a protein called heat shock protein 90 (HSP90). When PU-H71 binds to HSP90 in normal cells, it rapidly exits. But when HSP90 is incorporated into the epichaperome, the PU-H71 molecule becomes lodged and exits more slowly. This phenomenon is called kinetic selectivity. It helps explain why the drug affects the epichaperome. It also explains why PU-H71 appears to have fewer side effects than other drugs aimed at HSP90.
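
A simple way to see why a slower exit matters is to model the bound drug as decaying exponentially, with a fast off-rate for ordinary HSP90 and a much slower off-rate for epichaperome-incorporated HSP90. The sketch below uses made-up rate constants of my own, not measured values; the point is only that a longer residence time lets both the drug and its tracer accumulate where the epichaperome is present.

```python
import math

# Illustrative sketch of kinetic selectivity (assumed, not measured, rates):
# for a simple one-step unbinding model, the fraction of drug still bound
# after t hours is exp(-k_off * t).
def fraction_still_bound(k_off_per_hour, hours):
    return math.exp(-k_off_per_hour * hours)

K_OFF_NORMAL = 2.0         # fast release from HSP90 in normal cells (assumed)
K_OFF_EPICHAPEROME = 0.05  # slow release from epichaperome-bound HSP90 (assumed)

for t in (1, 6, 24):
    normal = fraction_still_bound(K_OFF_NORMAL, t)
    epi = fraction_still_bound(K_OFF_EPICHAPEROME, t)
    print(f"{t:>2} h: normal {normal:.3f} vs epichaperome {epi:.3f}")
```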

At the same time, this means that PU-H71 works only in tumors where an epichaperome has formed. This circumstance led to the need for a diagnostic method to determine which tumors carry the epichaperome and, ultimately, who might benefit from PU-H71.

In the Cancer Cell paper, the investigators report the development of a precision medicine tactic that uses a PET tracer with radioactive iodine. It is called [124I]-PU-H71, or PU-PET. PU-PET is the same molecule as PU-H71 except that it carries radioactive iodine instead of nonradioactive iodine. The radioactive version binds selectively to HSP90 within the epichaperome in the same way that the regular drug does. On a PET scan, PU-PET displays the location of the tumor or tumors that carry the epichaperome and therefore are likely to respond to the drug. Additionally, when it's given along with PU-H71, PU-PET can confirm that the drug is reaching the tumor.

This research fits into an area that is sometimes called theranostics or pharmacometrics, Dr. Dunphy says. We have found a very different way of selecting patients for targeted therapy.

He explains that with traditional targeted therapies, a portion of a tumor is removed with a biopsy and then analyzed. Biopsies can be difficult to perform if the tumor is located deep in the body. Additionally, people with advanced disease that has spread to other parts of the body may have many tumors, and not all of them may be driven by the same proteins. By using this imaging tool, we can noninvasively identify all the tumors that are likely to respond to the drug, and we can do it in a way that is much easier for patients, Dr. Dunphy says.

The researchers explain that this type of imaging also allows them to determine the best dose for each person. For other targeted therapies, doctors look at how long a drug stays in the blood. But that doesn't tell you how much is getting to the tumor, Dr. Pillarsetty says. By using this imaging agent, we can actually quantify how much of the drug will reach the tumor and how long it will stay there.

Plans for further clinical trials of PU-H71 are in the works. In addition, the technology reported in this paper may be applicable for similar drugs that also target the epichaperome.

See more here:

Bulls-Eye: Imaging Technology Could Confirm When a Drug Is Going to the Right Place - On Cancer - Memorial Sloan Kettering


