SHOOTING THE VECTOR: NEW TECHNOLOGIES AND THEIR USES TO SUPPRESS A DEADLY ANIMAL

By Tessa Smith, 7th October 11:30 AM

Like all good netizens, I have at least once gotten into an argument with strangers on the internet. The argument in question was about whether it was ‘ethical to purposely drive a species (in this case, a mosquito) to extinction’. I argued that it was not ethical, on the basis that the species has the same inherent right to survive as all other species, including the charismatic ones. The other parties (for there were many) argued that I was a privileged idiot with no empathy for the millions of people worldwide affected by the diseases. They probably had a point, but ethics aside, it got me thinking about 1) what the problem with the mosquito is, 2) what is being developed to suppress it, and 3) how likely it is that the species could become extinct as a result. Thanks to Emily Flies for her helpful suggestions on the piece.


1. The Problem

According to the World Health Organisation, mosquito-borne diseases cause several million deaths and hundreds of millions of infections each year. Mosquitoes of the genera Aedes, Anopheles and Culex are the primary culprits for transmission of infections (e.g. malaria, Ross River virus). In these genera, the female mosquito feeds on blood from a host, using the protein and iron in the blood to grow her eggs. By feeding on several hosts (of the same or different species), she may transmit any blood-borne diseases that are present.


Figure 1: Aedes aegypti mosquito, E. A. Goeldi (1905).

Aedes aegypti (Figure 1) is a vector for multiple diseases and lives in urban areas, thereby threatening over half of the world’s population. It is one of the most important disease-transmitting species for two main reasons: 1) it feeds during the day (from sunrise to after sunset) when people are outdoors and active, giving it more opportunities to infect people, and 2) it lays its eggs in small containers of water around urban areas, making it very hard to target control efforts at breeding sites. While not the only insect vector for the diseases it can spread, A. aegypti is by far the most common and of the most concern. Some of the diseases transmitted by Aedes aegypti are:

  • Yellow fever is endemic in Africa and South America where its primary reservoir is monkeys. The disease can cause death in a small percentage of infected people and there is a vaccine available. During 2017-2018 there were deadly outbreaks of the disease in Brazil and Nigeria.
  • The Zika virus has been recorded in 86 countries across the Americas, Africa, Asia and the Pacific. While causing few symptoms in most infected people, it can cause complications of pregnancy and malformations in babies. Those infected also suffer an increased risk of neuropathic conditions. There is no vaccine currently available for Zika.
  • Chikungunya is present in Africa, Asia, the Americas and Europe. It causes fever and joint pain in affected individuals and there is currently no vaccine for the disease.
  • Dengue is most prevalent in Asia, the Americas and Africa, with almost half of the world’s population at risk (Figure 2). The disease may have been underreported in Africa due to a range of factors, including the presence of similar illnesses (11). It disproportionately affects people in developing countries (especially indigenous people, people of colour and immigrants) (10). There is currently no vaccine for dengue, and the vaccine discovery process is proving difficult. (Note that Aedes mosquitos are not a vector for malaria.)


Figure 2: World Health Organization Estimated Deaths from Dengue per million persons, 2012. Yellow=0, Red=9 (CC BY-SA 4.0).

Here, I focus on the research aimed at suppressing the ability of this species to transmit disease.

The spread of Aedes aegypti away from its native home in Africa to a large area of the world, especially tropical and subtropical areas, is also a major concern. The distribution of the mosquito in some areas of South America has been attributed to ship dispersal during the slave trade in the 1600s (13) and, more recently, to the trade in used car tyres (WHO Dengue Control).

Controlling the spread and reproduction of these mosquitoes is critical for reducing disease transmission. A variety of historical control methods have been used against Aedes mosquitos, some of which have caused widespread environmental problems:

a) The mosquito was nearly eradicated in the Americas in the 1960s using DDT (dichlorodiphenyltrichloroethane), but re-established itself there when the project was discontinued (7) due to strong health and environmental concerns.

b) Biological control of the mosquito with fish or predatory copepods has had limited local success in controlling mosquito numbers, and has caused ecological problems when the exotic species escape and consume indigenous fauna. The mosquito fish (Gambusia holbrooki) was listed among the IUCN’s 100 worst invasive alien species after being released around the world to eat mosquito larvae.

The challenge of combatting the spread of mosquitoes is also influenced by other factors. The cost of effective management programs using insecticides is ongoing, and often a burden for poorer governments. Many mosquito species are evolving insecticide resistance, making insecticides increasingly ineffective. With climate change there is a predicted increase in A. aegypti’s global distribution (Figure 3) as it establishes in areas that were previously too cold to inhabit (3).


Figure 3: Summary of the modelled global distribution of Aedes aegypti under both current (dark blue) and future (dark orange) climatic conditions in 2050 showing stability of predictions at present and into the future with RCP (representative concentration pathway) 4.5 (3).

Traditional methods of controlling Aedes and the diseases it spreads through chemical and biological control are increasingly ineffective and unsustainable, necessitating new methods of control (4).

2. New options in the control of mosquito-borne disease

Wolbachia bacteria

Wolbachia is a genus of bacteria that occurs naturally in 60% of all insect and nematode species, including some mosquitos. The bacterium modifies the sperm of the host insect so that only eggs infected with Wolbachia develop normally (8). The presence of the bacteria within the mosquito has also been found to reduce the mosquito’s ability to transmit dengue virus to humans.


Figure 4: Percentage of mosquitos with Wolbachia in Townsville, Australia, 2015-2016 (6).

The introduction of the Wolbachia bacteria into A. aegypti mosquitos has been successful in local trials in northern Australia since 2011 (Eliminatedengue.com). Once introduced, Wolbachia is self-sustaining within the local mosquito population (Figure 4). Modelling by Dorigatti et al. (4) predicted that infection with Wolbachia reduces a mosquito’s ability to transmit dengue by 40%.

Genetic modification and gene drive

Another method for reducing infection rates is the release of genetically modified Aedes into the wild population. Unlike normal inheritance, where an altered gene is not always passed on, with gene drive inheritance the altered gene is always inherited; male mosquitos carry a dominant lethal gene that is passed on when they mate with females and kills 96% of their progeny (12). Modified Aedes aegypti mosquitos have so far been released in Burkina Faso and Brazil, where they have been able to suppress local populations (4). This process requires the release of many modified mosquitos over several months (9).
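To get a feel for how sustained releases of lethal-gene-carrying males can suppress a population, here is a toy back-of-the-envelope simulation in Python. It is my own illustrative sketch, not a model from the cited studies: the 96% offspring mortality comes from (12), but the release ratio, growth rate and carrying capacity are made-up round numbers.

```python
# Toy simulation of sustained releases of males carrying a dominant lethal
# gene. Numbers are illustrative, not from the cited field trials: we assume
# 96% of offspring sired by modified males die, a 5x per-generation growth
# rate, and a fixed ratio of released to wild males.

def suppress(wild_females, release_ratio=10, survival=0.04,
             growth=5.0, capacity=100_000, generations=12):
    """Return wild-female counts per generation under sustained releases."""
    history = [wild_females]
    for _ in range(generations):
        # Wild males are assumed equal in number to wild females;
        # released (modified) males outnumber them by `release_ratio`.
        frac_wild_matings = 1.0 / (1.0 + release_ratio)
        # Offspring of modified-male matings mostly die (dominant lethal).
        offspring = growth * wild_females * (
            frac_wild_matings + (1 - frac_wild_matings) * survival)
        # Cap the population at the local carrying capacity.
        wild_females = min(offspring, capacity)
        history.append(wild_females)
    return history

print(suppress(100_000))
```

With these made-up numbers the per-generation multiplier falls below one, so the population shrinks steadily toward elimination, which is why the releases must be sustained over many months (9).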

Non-gene-drive genetically modified mosquitos released in Brazil were found to have passed portions of their DNA into local populations, forming viable hybrid individuals that bred back to pre-release numbers within months (2). The establishment of a genetic monitoring program (2) was recommended as an important part of the development process for any new genetic tool, to ensure success.

3. How likely is it that the mosquito will become extinct as a result?

In a laboratory setting, gene drive was successfully used to eliminate a caged population of Anopheles gambiae mosquitoes (a species that spreads malaria) (5). The likelihood that gene drive or Wolbachia will drive the Aedes aegypti species to complete extinction over its entire range is low, but possible; models suggest that non-random mating will likely prevent gene drives from causing extinction (1). However, this may be investigated by future research teams.

All posts are personal reflections of the blog-post author and do not necessarily reflect the views of all other DEEP members.

References:

  1. Bull J. J., Remien C. H., Krone S. M. Gene-drive-mediated extinction is thwarted by population structure and evolution of sib mating. Evolution, Medicine, and Public Health. 2019; 2019(1): 66-81.
  2. Evans B. R., Kotsakiozi P., Costa-da-Silva A. L., Ioshino R. S., Garziera L., Pedrosa M. C., Malavasi A., Virginio J. F., Capurro M. L., Powell J. R. Transgenic Aedes aegypti Mosquitoes Transfer Genes into a Natural Population. Scientific Reports. 2019; 9(1): 13047.
  3. Kamal M., Kenawy M. A., Rady M. H., Khaled A. S., Samy A. M. Mapping the global potential distributions of two arboviral vectors Aedes aegypti and Ae. albopictus under changing climate. PloS one. 2019; 13(12): e0210122.
  4. Dorigatti I., McCormack C., Nedjati-Gilani G., Ferguson N. M. Using Wolbachia for Dengue Control: Insights from Modelling. Trends in parasitology. 2018; 34(2): 102-113.
  5. Kyrou K., Hammond A. M., Galizi R., Kranjc N., Burt A., Beaghton A. K., Nolan T., Crisanti A. A CRISPR–Cas9 gene drive targeting doublesex causes complete population suppression in caged Anopheles gambiae mosquitoes. Nature Biotechnology. 2018; 36: 1062.
  6. O’Neill S., Ryan P., Turley A., Wilson G., Retzki K., Iturbe-Ormaetxe I., Dong Y., Kenny N., Paton C., Ritchie S., Brown-Kenyon J., Stanford D., Wittmeier N., Anders K., Simmons C. Scaled deployment of Wolbachia to protect the community from Aedes transmitted arboviruses [version 1; peer review: 1 approved, 1 approved with reservations]. Gates Open Research. 2018; 2(36).
  7. Hotez P. J. Zika in the United States of America and a Fateful 1969 Decision. PLOS Neglected Tropical Diseases. 2016; 10(5): e0004765.
  8. Jiggins F. M. Open questions: how does Wolbachia do what it does? BMC Biology. 2016; 14(1): 92-92.
  9. Carvalho D. O., McKemey A. R., Garziera L., Lacroix R., Donnelly C. A., Alphey L., Malavasi A., Capurro M. L. Suppression of a Field Population of Aedes aegypti in Brazil by Sustained Release of Transgenic Male Mosquitoes. PLoS neglected tropical diseases. 2015; 9(7): e0003864-e0003864.
  10. Hunter P. Tropical diseases and the poor: Neglected tropical diseases are a public health problem for developing and developed countries alike. EMBO reports. 2014; 15(4): 347-350.
  11. Bhatt S., Gething P. W., Brady O. J., Messina J. P., Farlow A. W., Moyes C. L., Drake J. M., Brownstein J. S., Hoen A. G., Sankoh O., Myers M. F., George D. B., Jaenisch T., Wint G. R. W., Simmons C. P., Scott T. W., Farrar J. J., Hay S. I. The global distribution and burden of dengue. Nature. 2013; 496: 504.
  12. Phuc H. K., Andreasen M. H., Burton R. S., Vass C., Epton M. J., Pape G., Fu G., Condon K. C., Scaife S., Donnelly C. A., Coleman P. G., White-Cooper H., Alphey L. Late-acting dominant lethal genetic systems and mosquito control. BMC Biology. 2007; 5(1): 11.
  13. Mousson L., Dauga C., Garrigues T., Schaffner F., Vazeille M., Failloux A.-B. Phylogeography of Aedes (Stegomyia) aegypti (L.) and Aedes (Stegomyia) albopictus (Skuse) (Diptera: Culicidae) based on mitochondrial DNA variations. Genetical Research. 2005; 86(1): 1-11.

UNDERSTANDING BIAS-VARIANCE TRADE-OFF WITH ILLUSTRATION

By Kasirat Turfi Kasfi, 26th August 11:30 AM

When I sat down to brainstorm a topic for the DEEP blog, I thought: what would be the one thing that interests me and the rest of the DEEP members? Data, right! We all work with data; we want to find meanings and patterns in our data. We want to be able to make inferences, make decisions and make predictions from our data. We do that by using statistical learning methods.

I believe that most of us are familiar with one or more statistical learning models. And there seem to be quite a few of them out there! It can be difficult to decide which one to use, or to know which one is best! There is no best method that fits all kinds of data perfectly, and no one method “to rule them all” (if only Tolkien had invented the models!). I would therefore like to explain, intuitively, the bias-variance analysis that helps us understand whether a model is going to capture the true pattern in the “seen” data and also generalize well to “unseen” data, thus helping us select the best model for the given data.


Before diving into what “bias and variance trade-off” means, let’s get a little background on “statistical learning”. Any supervised statistical learning model tries to find a hypothesis function ħ (i.e. a mathematical representation) that approximates the relationship between the predictors (independent variables) and the response (dependent variable). The measure of how well the hypothesis function ħ fits the given data is the error E between the output of the hypothesis function ħ and the output of the target function ƒ that describes the data. It is called an error because it represents the gap between the hypothesis function and the target function. The smaller the value of the error E, the better the learning, meaning that the hypothesis has approximated the target function ƒ well. Therefore, there are two objectives: a) finding a good approximation of the target function ƒ, and b) having the approximation hold for out-of-sample data.

A more complex (meaning bigger) hypothesis set has a better chance of approximating the target function, because it is more likely to contain the target function, but it becomes increasingly harder to find that needle in the haystack! On the other hand, if the hypothesis set is simpler (smaller), it may not contain the target function, but if it luckily does, that function is easier to find. To find the best candidate function in the hypothesis set, the set must be navigated by means of the sample data provided, which is the only resource for preferring one hypothesis over another.
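To make the error E concrete, here is a minimal sketch in Python (the target, hypotheses and sample points are entirely made up for illustration), measuring how well two candidate hypotheses approximate a known sinusoidal target using mean squared error as E:

```python
# Measuring how well a hypothesis approximates a target function, using
# mean squared error as the error E. Target and hypotheses are made up
# purely for illustration.
import math

def mse(h, f, xs):
    """Mean squared error between hypothesis h and target f on points xs."""
    return sum((h(x) - f(x)) ** 2 for x in xs) / len(xs)

target = lambda x: math.sin(math.pi * x)   # the "true" pattern in the data
h_const = lambda x: 0.0                    # a hypothesis from a constant set
h_line = lambda x: x                       # a hypothesis from a linear set

xs = [i / 100 for i in range(-100, 101)]   # sample points on [-1, 1]
print(f"E(constant) = {mse(h_const, target, xs):.3f}")
print(f"E(line)     = {mse(h_line, target, xs):.3f}")
```

The hypothesis with the smaller E is the better approximation on these points; how well it does on points outside this sample is the generalization question discussed next.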

Bear with me, I will soon get to an explanation of what I am talking about with an illustration! Just two more paragraphs to go!

Now, getting back to bias and variance: these two entities are inherent properties of a learning model. Mathematically, when the error term E is decomposed, we get bias and variance [2]. Simply put, the trade-off is between approximation and generalization, between bias and variance. The total error term E measures how far the hypothesis function ħ learned from the given data is from the target function ƒ. Of the decomposed entities, bias is a measure of approximation ability: it measures how far the best approximation is from the target function ƒ. Variance is a measure of how far the hypothesis function ħ learned from a dataset is from the best possible candidate function that could be obtained from the hypothesis set H. The hypothesis chosen from H depends on the data that is provided, so a different set of data will give a different learned hypothesis. (This dependency is very important in the bias-variance analysis.)
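For readers who want the decomposition itself, the standard squared-error version (as in [2], written here in the post’s notation for noise-free data, where ħ⁽ᴰ⁾ is the hypothesis learned from a particular dataset D and the bar denotes the average hypothesis over datasets) is:

```latex
\underbrace{\mathbb{E}_{\mathcal{D}}\!\left[\big(\hbar^{(\mathcal{D})}(x) - f(x)\big)^{2}\right]}_{\text{expected error } E}
  \;=\;
\underbrace{\big(\bar{\hbar}(x) - f(x)\big)^{2}}_{\text{bias}^{2}}
  \;+\;
\underbrace{\mathbb{E}_{\mathcal{D}}\!\left[\big(\hbar^{(\mathcal{D})}(x) - \bar{\hbar}(x)\big)^{2}\right]}_{\text{variance}},
\qquad
\bar{\hbar}(x) = \mathbb{E}_{\mathcal{D}}\!\left[\hbar^{(\mathcal{D})}(x)\right].
```

With noisy data an irreducible noise term is added on the right, but the trade-off between the first two terms is the same.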

The trade-off is that if bias goes up, the variance goes down, and if the bias goes down, the variance goes up. If the hypothesis set H gets bigger, the bias gets smaller, bringing the best hypothesis closer to ƒ, but then there is a greater variety of functions to choose from, which increases the variance. Below is a graph [1] of the error E and the relationship between bias and variance; this relationship is independent of the data and holds for any statistical learning model. Model complexity is equivalent to hypothesis set size, in the sense that a bigger set holds functions of greater complexity.

Figure: Bias, variance and total error as a function of model complexity [1].

Finally, if readers are still with me, here is an example with illustration!

Let me explain this trade-off using an example. Imagine that we have to find a target function ƒ. In real life the target function is what we find by learning, but here, for the purpose of illustration, let’s assume we know it. Assume the target function ƒ is a sinusoid (displayed as an orange line in the figures for the rest of the article). Our objective is to find the best approximation of the target function ƒ given some data points. Also assume that we are using two hypothesis sets, namely H0 and H1, where H0 is the constant model and H1 is the linear model. Again, assume for illustration purposes that we only have these two hypothesis sets, to keep things simple! Both hypothesis sets will give an approximate function, and we compare which one is better using the bias-variance analysis.

We start off with approximation, before doing any learning. The H0 hypothesis set gives only constants, and the H1 hypothesis set gives only lines. In the two graphs below, the light grey shaded regions represent all the possible functions (constants and lines) the H0 and H1 models can generate from the range of data points that are available. The shapes of the grey shaded regions are a result of the model complexity and the available data points. The olive lines are the mean of each hypothesis set, representing the best of that set.

The linear model will choose the function that passes as close as possible to the data points. The constant model is better off choosing zero, since the errors are squared and the sinusoid is centred on zero. As we can see from the figures below (the shaded areas showing the errors), the linear model is clearly the winner: it is the better approximation, as its error is the smaller of the two; in fact, for the constant model, all of it is error!

Now, let’s look at the generalizing ability of the two models. Let’s say we have only two example data points (yes, we are stingy!) that we will use to approximate the sinusoid using the H0 and H1 hypothesis sets. The first figure below shows the points. The second shows the points with the target function. The third shows the points fitted with the approximate functions from the hypothesis sets that best fit the data points provided.


Let’s bring back the target function and see how much error we get for the constant and the line. As expected, the error is smaller for the linear model.

The constant and the line we have here depend on the data set; if we had another two points, the approximations would be different. From a learning perspective, which model is generalizing well? Until now we have been looking at a subset of the dataset that defined the target function (the sinusoid). If we look at more data points from the population, we are in for a surprise!

For all the unseen data points stretching infinitely before and after the data points we were working with, the constant model at least makes the right predictions periodically, but for the linear model it is a complete disaster! The error of the constant model stays bounded, whereas the error of the linear model keeps growing as we move away from the sample. In conclusion, the bias-variance analysis helped us figure out that, for this particular target function (a sinusoid), given a set of data points and a choice between a constant and a linear model, we are better off choosing the constant model: it is not the best approximation of the target, but it is the better generalized model. In this case we traded off approximation (bias) for generalization (variance).
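The whole story above can also be checked numerically. The sketch below is my own illustration of the same setup (not the code behind the figures): the target is a sinusoid sin(πx) on [-1, 1], each dataset is two random sample points, and bias² and variance for H0 and H1 are estimated by Monte Carlo over many datasets. Exact numbers will vary slightly with the seed and sample sizes.

```python
# Monte Carlo estimate of bias^2 and variance for the two hypothesis sets
# in the example above: fitting f(x) = sin(pi*x) on [-1, 1] from just two
# sample points, with H0 (a constant) and H1 (a line).
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-1, 1, 201)          # evaluation grid
f = np.sin(np.pi * xs)                # the target function

n_datasets = 10_000
h0 = np.empty((n_datasets, xs.size))  # constant-model predictions
h1 = np.empty((n_datasets, xs.size))  # linear-model predictions

for i in range(n_datasets):
    x1, x2 = rng.uniform(-1, 1, size=2)
    y1, y2 = np.sin(np.pi * x1), np.sin(np.pi * x2)
    h0[i] = (y1 + y2) / 2                      # least-squares constant
    slope = (y2 - y1) / (x2 - x1)              # line through both points
    h1[i] = slope * (xs - x1) + y1

for name, h in (("H0 (constant)", h0), ("H1 (line)", h1)):
    hbar = h.mean(axis=0)                      # the "average hypothesis"
    bias2 = np.mean((hbar - f) ** 2)
    var = np.mean((h - hbar) ** 2)
    print(f"{name}: bias^2 = {bias2:.2f}, variance = {var:.2f}")
```

With these settings the line has the smaller bias² but a much larger variance, so the constant model comes out with the smaller total error, matching the conclusion above.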

All posts are personal reflections of the blog-post author and do not necessarily reflect the views of all other DEEP members.

Reference & Source:

[1] Figure taken from http://scott.fortmann-roe.com/docs/BiasVariance.html

[2] James, G., Witten, D., Hastie, T. and Tibshirani, R., 2013. An introduction to statistical learning (Vol. 112, p. 18). New York: springer.

[3] All graphs plotted using Matplotlib.Pyplot

NUCLEAR, RENEWABLES, OR BOTH?


By Barry Brook, 29th May 3:00 PM

I’ve long been interested in sustainable energy systems. In 2008 I co-wrote a popular book on nuclear energy, I have authored many refereed papers on energy-system data and modelling, and I used to run a popular blog on climate and energy that ended up getting over 5 million hits during its lifetime. You might have noticed that I don’t talk about energy all that much these days, mainly because I’ve tired of the public debate and have instead concentrated my focus on conservation biology.

But I occasionally still dip my toe in the murky waters of the debate. Recently, Robyn Williams, host of ABC Radio National’s The Science Show, was down in Hobart and interviewed me on the topic. Below is a transcript. You can also listen to it (go here); my section starts at the 11 min 46 second mark. By the way, the preamble rant by Helen Caldicott at the start of the segment is a nuclear-powered version of the Gish Gallop, so listen with due scepticism. If you have any specific questions, post them below (or, for those in my research group, come and ask me, or slack it on random!).


Robyn Williams: Professor Barry Brook at the University of Tasmania in Hobart has long supported the nuclear option. But even he says it’s a long way off.

Barry Brook: I think there’s a good chance that nuclear is an option in the long term, but not in the short term in Australia, simply because the time taken to prepare the ground, not just physically for the infrastructure but in terms of getting public support and political support, makes it a long-term goal, probably more than a decade away at least.

Robyn Williams: But do you think in terms of climate change and doing its bit there as part of a number of different sorts of responses that it has a place?

Barry Brook: Yes, I think so, definitely, because it provides a zero carbon alternative to coal. And what the renewable energy sources like solar and wind don’t offer is that baseload electricity which is sometimes dismissed as being irrelevant but it is really important and provides a stable underpinning to the grid. So I think that is nuclear’s role. Whether it can do more than that or not is really a matter of politics and economics. It certainly could do the whole job but it could also work as part of an energy mix quite effectively.

Robyn Williams: Are there two sorts of nuclear power station now after Chernobyl, which is terribly old-fashioned and weird, and Fukushima, and Sellafield, which again was pretty ancient?

Barry Brook: Those reactors you mention were actually quite diverse. Chernobyl was a very different reactor to Fukushima, for example. The ones that are built today, or were built even in the 1970s in Western countries, are called light-water reactors; there are also heavy-water reactors that are popular in places like India and Canada. I think the more useful distinction today is between large monolithic reactors and small modular reactors, the latter being an alternative that could be much cheaper, certainly per power plant, because they are much smaller and therefore potentially faster to deploy. They could also be more convenient because, instead of requiring a grid load of many gigawatts, you only need perhaps a few hundred megawatts or even less. And so they are feasible in a much wider range of circumstances.


Figure by Sonomi Brook: Nuclear waste generated from traditional uranium fuel cycles versus generation IV Integral Fast Reactors. For more details see: https://bravenewclimate.com/category/climate-change/

Robyn Williams: Wouldn’t you need an awful lot of them in a biggish country like Australia?

Barry Brook: Yes, but you could also argue you’d need an awful lot of wind turbines or solar panels. So the smaller the output of a single generating unit, the more you are obviously going to need. But you can quite reasonably concentrate these modules, as they’re known, in power parks. So, for example, you could develop the infrastructure to house a dozen or even 50 of these modules at a single power park to produce large amounts of electricity from a single site, if that is what’s desired. A key advantage over a large plant is that as soon as you start to install modules they can generate electricity, whereas for a large plant you’ve got to build the whole thing before anything comes out of it, so that delay can be a decade or more until you’ve got electricity; at least in theory, small modular reactors could be built in factories and taken by rail to the site extremely rapidly.

Robyn Williams: And what do they cost? Can we afford them?

Barry Brook: Well, they are probably cost-competitive with coal. However, that’s untested. Small modular reactors have been used for many decades in the military, where cost wasn’t a particular consideration. But developing a commercial one that could be built in a factory and rolled out to customers is something that a number of vendors are pursuing, and no one has got there yet. The closest is a company called NuScale in the US, which is going through licensing with the Nuclear Regulatory Commission as we speak. And should they complete that, then they could well be the first to market, and their goal is to have it cheaper than coal, and that remains to be seen. But if that was the case then that would become, I think, a very attractive option for a country like Australia to import.

Robyn Williams: And what if you’re a villain, what if you’re a terrorist and you want to blow them up, what would be the effect?

Barry Brook: Well, it would be very difficult not only to penetrate the perimeter of one of these power plants and get to the reactor but also to overcome the inherent safety systems that are built into them. An advantage of small modular reactors is that there are a lot of principles of physics that you can bring to bear to make them inherently safe, rather than safe due to engineered systems. So they can be very difficult to disrupt. Another advantage of small reactors is that they can be buried underground, so if you imagine almost a concrete bunker, a terrorist trying to penetrate that would have an extremely difficult job. Terrorists have never been able to penetrate any nuclear reactor or nuclear power plant and blow it up, and these would be much more difficult than that, so it seems an unlikely proposition.

Robyn Williams: As we said, you said in the beginning that the interest is just at the moment in Australian politics and society not very great. We’ve got Mark Latham wanting them. Scott Morrison has said it’s not part of Liberal Party policy. So do you wish it were acceptable more in a political sense?

Barry Brook: Yes, I do wish it was more acceptable. I think the only way nuclear will end up getting built in Australia is if you have bipartisan support. That doesn’t mean to say you’ll get the support of all parties, but really all you need is Labor and the coalition to jointly agree that this is a reasonable investment in Australia’s future energy infrastructure and then it can happen. But I think that decision would have to be catalysed by a number of other events that haven’t yet occurred, and the biggest of those will be if you can’t build new coal fired power stations because of public outcry, and that seems probable. If the cost of gas goes up such that they are infeasible for providing baseload electricity. And if the combination of large-scale renewables, principally wind and solar, along with some form of effective energy storage proves to be economically or socially or environmentally unacceptable, coupled with an increasing threat of climate change and potentially more impacts, more extreme events and so forth, that combination could lead the Australian political landscape to change enough to support I think some joint venture maybe between government and industry to build a future generation of nuclear power plants here. But again, in that framing it’s more likely to be a 10- or 20-year prospect before there is any excavation or concrete poured on such a project.

Robyn Williams: Barry Brook is a professor at the University of Tasmania in Hobart.