Challenges of polymer informatics and the driving force

news · 6 years ago
by Krisztián Niesz

Polymers are arguably the single most widely used class of materials in the modern age, yet only a small number of useful informatics tools exist to work with them. World annual consumption is expected to approach 400 million metric tons(!) by 2016 (up 70% from 2006), and the market sectors include anything and everything you can possibly imagine: food, textiles, furniture, printing, plastics, machinery, electronics, transportation, construction, and so on. So I think it is safe to say that our civilization depends on polymers.

Serving this huge and hungry market, the Middle East plays an important role as the largest exporter, with an expected net export of petrochemicals of 25 mmt (most commodity polymers are produced from crude oil via distillation, steam cracking, and polymerization processes). At the other end of the chain is China, which will remain the biggest importer of polymers, with an expected net import of 16 mmt by 2016 [1]. Related to this, we are all aware of the rising cost of crude oil and of how it is reshaping worldwide economics. As a result, we need to act quickly and adopt new technologies to make new polymeric species (e.g. biopolymers) from renewable and sustainable ingredients.

The other issue that needs to be handled delicately is not only the environmental impact of the polymers themselves, but also that of the processes by which they are produced. Polymers’ success traces back to their cheap manufacturing, non-toxicity, easy processing, and stability. The strength of the chemical bonds that makes them durable, though, is the exact same reason why it takes hundreds of years to degrade them through natural processes. Within the last 60 years, over 1 billion tons of plastics have served their purpose and then been thrown away (literally). If we don’t want to drown in our own trash, we need to focus on alternative ways to replace the currently used synthetic polymers with biodegradable counterparts. We cannot wait any longer, which means that the “one-step-at-a-time” experimental thinking common in materials science has to be replaced by the well-known and highly acknowledged high-throughput experimentation developed for other areas, such as drug discovery. It is yet again apparent that pharma is way ahead of the other chemical industries in this sense.
The good news, though, is that initiatives are already in place and working toward a solution; we just have to understand the nature of the problem and participate in solving it.

Who’s going to help with this revolutionary and quite large-scale change?

Cheminformatics, we should say. Unfortunately, though, cheminformatics is very much in its infancy here [2]. What’s missing most is a common, appropriate description/representation of polymers that is easily understandable to computers and could be used to generate mash-ups with other interdisciplinary sources of information harvested from the world wide web. The connection tables used to accurately describe small molecules break down for heterodisperse systems and therefore cannot be used for property prediction (an important step in materials design). Polymers are exactly such systems: they are collections of macromolecules, not single well-defined structures.

Furthermore, the desired technology would also allow metadata (e.g. the history of the polymer, production descriptions, and characterization details) to be attached to these descriptions and converted into common, usable knowledge. The semantic web is envisioned to play a significant role in that conversion: Web 3.0 aims at turning the web of unstructured data in the form of documents into a web of usable data, and the foundation for this idea has already been laid in the form of suitable markup languages (e.g. the eXtensible Markup Language, XML). Translated into the “language” of the polymer world, this means using the Polymer Markup Language (PML), developed by Adams et al. on the basis of CML [3]. Although PML is far from complete, it is a very good starting point, to say the least: it already provides tremendous support for computer-aided polymer design by defining a common structural representation and the related information, overcoming some of the challenges outlined above. One thing is hard to overcome, though, and it is not a technical barrier but a cultural one. Yes, I’m talking about scientific data sharing.
As science these days is becoming more and more data intensive and collaborative in nature, and “data are the infrastructure of science,” sharing data with peers for the greater good is more important than ever. Although privacy, IP rights, and commercial potential will always be obstacles here, an increasing part of the scientific community realizes the advantages of working in a wide, collaborative environment rather than in a competitive one that often leads to closed doors [4].
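The point about connection tables breaking down can be made concrete: a polymer sample is a distribution of chain lengths, not one exact molecule, so any useful representation must carry ensemble statistics such as the number-average molar mass (Mn), the weight-average molar mass (Mw), and the polydispersity index (PDI). A minimal sketch in Python follows; the chain-length counts and the polystyrene repeat-unit mass are illustrative assumptions, not data from this article:

```python
# Why one connection table cannot describe a polymer sample: a synthetic
# polymer is an ensemble of chains of different lengths, characterized by
# distribution statistics rather than a single structure.

# chain length (in repeat units) -> number of chains with that length
# (hypothetical distribution, for illustration only)
chains = {50: 120, 100: 300, 150: 200, 200: 80}
M0 = 104.15  # molar mass of one styrene repeat unit, g/mol

def molar_mass_averages(chains, repeat_mass):
    """Return (Mn, Mw) for a discrete chain-length distribution."""
    n_total = sum(chains.values())  # total number of chains
    mass_total = sum(n * length * repeat_mass for length, n in chains.items())
    # Number-average molar mass: total mass / number of chains
    mn = mass_total / n_total
    # Weight-average molar mass: mass-weighted mean of chain masses
    mw = sum(n * (length * repeat_mass) ** 2
             for length, n in chains.items()) / mass_total
    return mn, mw

mn, mw = molar_mass_averages(chains, M0)
pdi = mw / mn  # polydispersity index; exactly 1.0 only for a monodisperse sample
print(f"Mn = {mn:.0f} g/mol, Mw = {mw:.0f} g/mol, PDI = {pdi:.2f}")
```

Any two samples with the same repeat unit but different distributions would need different records, which is exactly the information a small-molecule connection table has no place for.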

As outlined here, there is a lot to do, so back to work, Cheminfomaniacs!

References:
[1] http://www.accenture.com/SiteCollectionDocuments/PDF/ShiftingManufacturingPOVFINAL.pdf
[2] Adv. Polym. Sci. DOI:10.1007/12_2009_18
[3] J. Chem. Inf. Model. 48:2118–2128
[4] CCDC (2008) The Cambridge Crystallographic Data Centre. http://www.ccdc.cam.ac.uk/; Day NE (2008) CrystalEye. http://wwmm.ch.cam.ac.uk/crystaleye/index.html