Brainstorming session

What are the alternatives to using mice and other animals in scientific research?

Image credit: Pogrebnoj Alexandroff

Muhammad M Rahman Feb 09, 2021
Within academic research, mouse models are considered the gold standard. Using mice has long been key to getting work published in the elite, high-impact-factor journals that open many career paths for ambitious scientists. It is ingrained in research culture – reviewers believe mouse models to be the closest substitute for human testing, as we share approximately 85% of our genes with mice. But are mice really so similar to humans? How confidently can we extrapolate data from mouse studies to humans? On some occasions it can be argued that it is safer to test on mice first, and of course we should not diminish the ground-breaking discoveries made using mouse models. But should we still rely on such models given the advances in technology?

One such example is the 3D skin model, also known as an organotypic culture. These are artificial models in which pieces of skin are generated from individual components such as collagen, fibroblasts and keratinocytes. The collagen is mixed with fibroblasts to mimic the dermis, then keratinocytes are grown on top. The artificial skin is cultured at an air-liquid interface: the dermis is submerged in culture media while the keratinocytes are left dry on top, which allows them to differentiate and form the various epidermal layers. How can we trust such a model to be biologically accurate? Back in 1998, studies showed that organotypic skin models could engraft and persist for 20 weeks when grafted onto mice. Yes, mice were used in that particular study, but it demonstrated that organotypic models can substitute for mice when studying skin. Indeed, follow-up work has led to the use of de-epithelialised dermis (DED), whereby a biopsy is taken from a burns victim and the epidermis removed, leaving just the dermal base. Various cell types, such as dermal papilla cells (from the hair follicle), are then added and grown artificially before the construct is grafted back onto the patient. This promotes wound healing and hair growth in the burns victim.

This is just one example of how organotypic skin models can now be considered a viable alternative to using skin from mice. What about other artificial models and their relevance to human biology? How reliable are technologies like organ-on-a-chip models or tumour spheres? Let us discuss artificial models and animal models, their advantages and disadvantages, the research culture, the ethics and more.

[1]Berning M, Prätzel-Wunder S, Bickenbach JR, Boukamp P. Three-Dimensional In Vitro Skin and Skin Cancer Models Based on Human Fibroblast-Derived Matrix. Tissue Eng Part C Methods. 2015 Sep;21(9):958-70. doi: 10.1089/ten.TEC.2014.0698. Epub 2015 May 7. PMID: 25837604.

[2]Kolodka TM, Garlick JA, Taichman LB. Evidence for keratinocyte stem cells in vitro: long term engraftment and persistence of transgene expression from retrovirus-transduced keratinocytes. Proc Natl Acad Sci U S A. 1998 Apr 14;95(8):4356-61. doi: 10.1073/pnas.95.8.4356. PMID: 9539741; PMCID: PMC22493.

[3]Ojeh N, Akgül B, Tomic-Canic M, Philpott M, Navsaria H. In vitro skin models to study epithelial regeneration from the hair follicle. PLoS One. 2017 Mar 28;12(3):e0174389. doi: 10.1371/journal.pone.0174389. PMID: 28350869; PMCID: PMC5370106.

10 Creative contributions

Computer (in silico) modeling

Shubhankar Kulkarni Feb 09, 2021
Computer models that simulate human biology and physiology are used to study the progression of diseases and the effect of therapeutics on them. Studies have shown that such models can accurately predict how new drugs will behave in the human body. Quantitative structure-activity relationships (QSARs) are techniques that estimate a chemical's hazard potential based on its similarity to known chemicals and existing knowledge of human physiology.
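As a loose illustration of the similarity idea behind QSAR, here is a minimal Python sketch of "read-across": predicting a chemical's toxicity from its most similar known neighbours, with similarity measured on binary structural fingerprints. The fingerprints, names and toxicity scores below are invented for illustration; real QSAR tools use far richer molecular descriptors and statistical models.

```python
# Minimal sketch of similarity-based read-across, the core idea behind QSAR:
# predict a chemical's toxicity from its nearest known neighbours.
# All fingerprints and toxicity labels here are hypothetical.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprints (sets of 'on' bits)."""
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared)

def predict_toxicity(query_fp, known, k=2):
    """Average the toxicity of the k most similar known chemicals."""
    ranked = sorted(known, key=lambda c: tanimoto(query_fp, c["fp"]), reverse=True)
    top = ranked[:k]
    return sum(c["toxic"] for c in top) / len(top)

# Hypothetical reference set: fingerprints as sets of structural-feature bits.
known_chemicals = [
    {"name": "A", "fp": {1, 2, 3, 4}, "toxic": 1.0},
    {"name": "B", "fp": {1, 2, 5},    "toxic": 0.8},
    {"name": "C", "fp": {7, 8, 9},    "toxic": 0.1},
]

# A query chemical sharing features with the toxic cluster A/B scores high.
score = predict_toxicity({1, 2, 3}, known_chemicals, k=2)
```

In practice the "existing knowledge of human physiology" enters through which structural features are encoded and how they are weighted, not through a simple nearest-neighbour average as in this toy version.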

[1]Martonen T, Fleming J, Schroeter J, Conway J, Hwang D. In silico modeling of asthma. Adv Drug Deliv Rev. 2003 Jul 18;55(7):829-49. doi: 10.1016/s0169-409x(03)00080-2. PMID: 12842603.

[2]https://www.sciencedirect.com/topics/pharmacology-toxicology-and-pharmaceutical-science/quantitative-structure-activity-relationship#:~:text=Third%20Edition)%2C%202014-,QSAR,back%20to%20the%20nineteenth%20century.

Muhammad M Rahman 3 months ago
This is the way a lot of research is heading, with a focus on bioinformatics and machine learning. However, there has to be validation using cellular systems. One thing to consider is that when you add drugs to cells, you get different responses depending on the cell type and even the sub-population, i.e. stem cells may not respond like differentiating cells. If you then try to simulate cancer cell responses, you have to factor in the multiple mutations in those cells and also consider compensatory mechanisms, which often make such drugs less effective than originally expected. The question then is which lab-based cancer models could be used to test drugs.
Shubhankar Kulkarni 3 months ago
Muhammad M Rahman I agree. We still need animal models for validating the results. However, in silico models have reduced the dependency on animal models. Drug interactions can be studied in silico, and only a few selected important interactions need to be validated in animals. Moreover, the range of effect magnitudes you can explore in silico far exceeds what is accessible in real physiology. An effect that is possible in silico may or may not be observed in the animal model, but an effect that is impossible in silico is impossible in the real scenario, too. This gives you an idea of the upper and lower limits of an effect in real physiology, so in silico models help there as well. If observations from real life fall beyond what the model could predict, you know you are missing a crucial property and need to add it to the model.

I agree that each cancer cell might have a different set of mutations. So, when you try to mimic them in silico, you need to set a probability of a cell having a mutation close to random. You can also set the type of mutation as random. So, basically, you start with the most random cancer cell design. Then you start tweaking the parameters as per the observations from human patients. For example, if colorectal carcinoma cells show a high probability of mutations that lead to the overexpression of TWIST1 (a transcription factor), you add that observation to your model. So, when you attach a random probability to all other mutations, you attach a higher probability to the TWIST1 overexpression mutants, which will be based on the proportion of observed TWIST1 mutations out of all the mutations observed in real data. This takes your model closer to the real one and the results from your model are, therefore, robust.
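The weighting step described above can be sketched in a few lines of Python: most mutations keep a uniform background probability, while the mutation observed frequently in patient data gets a weight equal to its observed proportion. The mutation list and the 0.3 TWIST1 proportion are hypothetical placeholders, not real colorectal-cancer frequencies.

```python
import random

# Sketch of weighting one mutation by its observed frequency while the rest
# share the remaining probability mass uniformly. All numbers are illustrative.

mutations = ["TWIST1_overexpression", "KRAS_G12D", "TP53_loss", "APC_truncation"]
observed_twist1_proportion = 0.3  # hypothetical fraction of TWIST1 events in patient data

def mutation_weights(mutations, focal, focal_prob):
    """Give the focal mutation its observed probability; split the rest uniformly."""
    background = (1.0 - focal_prob) / (len(mutations) - 1)
    return {m: (focal_prob if m == focal else background) for m in mutations}

weights = mutation_weights(mutations, "TWIST1_overexpression", observed_twist1_proportion)

# Sample one mutation per simulated cancer cell according to these weights.
rng = random.Random(0)
cells = rng.choices(mutations, weights=[weights[m] for m in mutations], k=10)
```

In a fuller model each simulated cell would carry a whole set of mutations drawn this way, with the weights updated as more patient data are incorporated.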

There is no limit to how closely you can mimic a cancer cell/ tumor; so you need to decide the level of complexity (adding more real-life parameters makes your model more and more complex) you want in your model depending upon the kind of questions you want to answer. For example, if you simply want to identify the range of the kind of mutations that are observed in breast cancer, a random cancer model with an initial set of epigenetic modifications observed in healthy breast tissue and a few major mutations to start with (like BRCA1 mutations) will suffice. On the other hand, most researchers consider colorectal cancer as a single unit. However, if you want to differentiate how colon cancer and rectal cancer will react to a certain drug, you may want the cancer cell models to precisely mimic colon and rectal cancers separately. Although both these cancers have minute differences, they may behave differently to a certain new drug; after all, their locations are different.

Human-patient simulators

Shubhankar Kulkarni Feb 09, 2021
Life-like computerized human-patient simulators that breathe, bleed, convulse, talk, and even die/flat-line have been constructed and are available for teaching and research. High-end simulators can even mimic illnesses and injuries and give appropriate biological responses to medical interventions and therapy.

TraumaMan is a human torso that breathes and bleeds and has realistic layers of skin, ribs, and internal organs. It is also used to teach life-saving skills.

[1]https://caehealthcare.com/patient-simulation/

[2]https://www.peta.org/blog/countries-end-animal-labs-traumaman/

Muhammad M Rahman 3 months ago
Excellent find. These models have helped advance medical training methods but, as you have mentioned, the costs are very high.
Manel Lladó Santaeularia 3 months ago
That's fascinating. I wonder, however, what the cost of this kind of equipment is. A big step towards significantly changing the use of animal models would be to make these simulators available in most hospitals and med schools, and I imagine cost could be an issue.

I also wonder about anatomical variation. Anyone who has studied practical anatomy knows very well that organs, veins, nerves and the like are not always the same in shape, size and position. Do these simulators account for that, so that students can practise with different situations?
Shubhankar Kulkarni 3 months ago
Manel Lladó Santaeularia I know, right! The prices are somewhere between USD 40k and 45k (https://savvik.com/wp-content/uploads/2019/05/CAE-Healthcare-Pricing.pdf). According to this article (https://www.peta.org/blog/countries-end-animal-labs-traumaman/), more and more institutions are opting for the simulators, and they are widely used in medical schools and hospitals. I did not find anything on anatomical variation in the products. However, I imagine there will be close to zero variation, since it would increase the cost of each simulator.

Research using human volunteers

Shubhankar Kulkarni Feb 09, 2021
A method known as "microdosing" is used to gain vital information about the safety of an experimental drug and the pathways by which it is metabolized in humans. This can be done before conducting large-scale human trials. Volunteers in these microdosing trials are given an extremely small, one-time drug dose, and the behavior of the drug in the body is monitored.

Additionally, advanced brain imaging techniques like functional magnetic resonance imaging (fMRI) can be used to study the living human brain safely and in fine detail, and brain disorders can even be temporarily and reversibly induced using transcranial magnetic stimulation in order to study them.

[1]https://www.peta.org/issues/animals-used-for-experimentation/alternatives-animal-testing/#:~:text=These%20alternatives%20to%20animal%20testing,and%20studies%20with%20human%20volunteers.

The Digital Twin concept

Manel Lladó Santaeularia Feb 09, 2021
Although it may sound crazy, some prominent scientists in the field of computational biological modelling have proposed in recent years to develop what they call Digital Twins. This concept would integrate machine learning and multiscale modeling to create a virtual replica of ourselves that allows us to explore our interaction with the world in real time. Think of it as a virtual mirror that allows us to simulate our personal medical history and health condition using data-driven analytical algorithms and theory-driven physical knowledge. This Digital Twin would allow us to improve health, sports and education by integrating population data with personalized data, all adjusted in real time based on continuously recorded health and lifestyle parameters from various sources.

Okay, this all sounds really good, but is it feasible? It should be. Over the past two decades, multiscale modeling has emerged as a promising tool for building individual organ models by systematically integrating knowledge from the tissue, cellular and molecular levels. This gigantic modelling effort seeks to predict the behavior of biological, biomedical or behavioral systems. For this, it is crucial to establish causal relations in the observed data and to find the small systems and interactions that, collectively, regulate what happens in a larger system. The ability to collect and store large quantities of data, together with the development of complex machine learning, makes it possible to process, integrate and analyze these massive amounts of data, with the goal of identifying correlations and inferring the dynamics of the overall system. Computationally simulated and experimentally measured features can then be compared, and the discrepancies used to further refine the machine learning models. This can also be coupled with theory-driven modelling to gain a broader spectrum of knowledge.
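As a toy illustration of that simulate-compare-refine loop, the sketch below fits a single parameter of a hypothetical "organ model" by repeatedly comparing simulated output against measurements and nudging the parameter to reduce the mismatch. The model (a linear dose response) and the data are invented for illustration; real multiscale models involve vastly more parameters and machine-learning machinery, but the feedback loop has the same shape.

```python
# Toy simulate-compare-refine loop: a one-parameter "organ model" is repeatedly
# adjusted so its simulated output matches measured data. Everything here is a
# hypothetical stand-in for a real multiscale model.

measured = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # (dose, measured response)

def simulate(dose, sensitivity):
    """Hypothetical minimal organ model: baseline response plus dose effect."""
    return 1.0 + sensitivity * dose

def refine(sensitivity, data, lr=0.1, steps=200):
    """Gradient-descent refinement of the parameter against measurements."""
    for _ in range(steps):
        # Gradient of the mean squared simulated-vs-measured error.
        grad = sum(2 * (simulate(d, sensitivity) - y) * d for d, y in data) / len(data)
        sensitivity -= lr * grad
    return sensitivity

fitted = refine(sensitivity=0.0, data=measured)
```

The point is the cycle, not the arithmetic: simulate, compare with experiment, update the model, repeat until the simulated features match the measured ones.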

Well, I didn’t say that feasible was going to be easy, did I? While the knowledge and artificial intelligence necessary to develop a Digital Twin model are still far from being a reality, the necessary steps are being taken. Of course biological systems tend to be extremely complex and thus elucidating them can be a laborious task. But, if this kind of model could be generated, we could have a model of each patient, which would allow us to better decide personalized treatments by being able to predict the therapeutic outcome, side effects and even the effect of different doses or drugs. This would definitely revolutionize our concept of medicine.
But even more complex applications of this model come to mind. If this kind of model really were perfect, we could integrate hundreds or thousands of personal models and use them to simulate a clinical trial for a particular drug. These models could cover the whole range of characteristics we can imagine: different bodies, ages, health statuses, physiological variations… This would deliver extensive amounts of information and would probably reduce the number of clinical trials by predicting whether a drug would be safe and effective, saving a lot of time and money in drug development. That means faster drug approval and lower costs (clinical trials cost millions of dollars that ultimately have to be covered by the drug price).

And getting to the point of the session, this could also greatly reduce the need for animal experimentation. As we all know, not all drugs that are tested in animal models end up working in patients. While proof of concept demonstrations may need to occur in animal models before that data is integrated in the Digital Twin model, a lot of potential outcomes in patients could be predicted directly. That would lead to a drastic reduction of the need for animal experimentation for regulatory purposes.
Okay, you’ve convinced me. But what is the downside? Well, the complexity of the machine learning and modelling necessary to build this would be enormous, and data would need to be integrated constantly to keep refining the model. This means a lot of time and money would be needed to develop something like this. Will international organizations be interested in funding it? Will they believe it can really be as useful as it sounds? And will we succeed in making the Digital Twin a reality? Only time will tell.

[1]Alber M, Buganza Tepole A, Cannon WR, et al. Integrating machine learning and multiscale modeling-perspectives, challenges, and opportunities in the biological, biomedical, and behavioral sciences. NPJ Digit Med. 2019;2:115. Published 2019 Nov 25. doi:10.1038/s41746-019-0193-y

[2]Bruynseels K, Santoni de Sio F, van den Hoven J. Digital Twins in Health Care: Ethical Implications of an Emerging Engineering Paradigm. Front Genet. 2018;9:31. Published 2018 Feb 13. doi:10.3389/fgene.2018.00031

Muhammad M Rahman 3 months ago
You mention personalised treatment, and that does already exist in individual-patient RNA sequencing, whereby you take a tumour biopsy from a patient along with normal tissue and then identify the mutations in that individual. Furthermore, if biopsy tissue is cultured and tested directly with drugs, you can treat the patient with drugs you know work on the patient's own cells. This technology is here now, but it is expensive and not feasible at scale. I believe the Digital Twin model will become a great resource, but it depends on the richness of the data. By that I mean I would like to know how my biology compares to other people in my demographic, whether by ethnicity, height, weight, economic status or anything else that may affect an individual's health. Gathering this type of information raises ethical issues around data collection and the conclusions that could be drawn from it.

What about the ethical considerations?

Muhammad M Rahman Feb 13, 2021
The most obvious ethical issue lies with the use of animal models, and all will agree that there should be a move to reduce the use of animals in research. But what about other models? Are there not ethical considerations for large-scale genetic data collection, and should there be boundaries on the use of such powerful data? If an ethnic minority group is more susceptible to a certain disease, could this become a stigma?
Povilas S 2 months ago
Other ethical issues are also important to address. However, human data collection, invasion of personal privacy, etc. should be less of an ethical concern than physically torturing and killing animals. Avoiding the latter should be the priority, but sadly it is not. Humans think they are superior to animals and that human well-being comes first; therefore this is considered acceptable.

Speciesism is behind animal experimentation

Povilas S Mar 10, 2021
This is a very important topic and a necessary initiative to look for alternatives. I want to point out one philosophical/moral argument that is usually not considered by the majority and that is arguably the main reason why animal experimentation was, and still is, justified and maintained. That is speciesism: the belief that some species (including humans) are superior to others, and that the well-being of the superior species therefore justifies the suffering of the inferior ones.

This becomes very clear with higher animals, and mammals in particular, because their physiology is not so distant from ours and it is clear that they experience emotions and suffering in a manner similar to us. This is clear both from a scientific perspective (similar nervous systems, hormones, etc.) and from direct experience when one interacts with animals.

This is not to say that other (non-mammalian or non-vertebrate) life forms are ok to be treated as one pleases, just that with plants, insects, unicellular organisms, etc. it becomes much more difficult to discuss whether they feel pain, let alone emotions in a similar manner as we do (if at all). However, with mammals (which are used for scientific experiments most extensively) this argument is very sound. The only thing that is really superior in humans vs. other mammals is intellect.

Then the question arises - can a species with higher intellect be considered morally superior to those with lower? If so, should then humans with lower intellect (especially those with mental impairments) also be considered morally inferior? If the answer to the last question is no, then we are back to where we started - the main reason is the different species.

Humans are very anthropocentric, and only very few truly think and act in ecocentric terms. In the broader picture of ecosystems and life in general, anthropocentrism is a form of egoism. Upon closer inspection, it can be blamed for virtually all ecological problems. It is a lack of respect for nature and other species: human needs always come first. In reality, however, we are just a part of nature, and nature has its ways of sustaining balance, so we cannot avoid feedback loops hitting back at us. A species that abuses natural resources and other species can be considered parasitic from a broader ecological perspective. This is something to think about in times of COVID-19.

Why animal models are necessary (so far)

Manel Lladó Santaeularia Feb 09, 2021
I really like this topic, thank you for creating this interesting session. In this contribution I'd like to pinpoint some of the reasons that research with animal models is still necessary and what we could potentially do to overcome that.

I'd like to start by commenting that, although mice (and rats) are the most used animal models in research, that doesn't mean they are "the closest representation of human testing". That is why, after testing in mice and before clinical trials, many preclinical studies are conducted in models that are more relevant from a clinical standpoint. These models are closer to humans and allow us to test things we could not test in mice. There are several factors that are crucial in large animal models and cannot be addressed with the mouse:

  • Organ and Tissue Morphology: While the anatomical and histological basis of many of our tissues is relatively similar to that of the mouse, there are some major differences that have to be accounted for, and a more similar animal model should reduce these differences as much as possible. To give an example, retina research faces a major hurdle when working with mice: the mouse retina has a different composition from ours because mice are nocturnal while we are diurnal. So, although the composition of our photoreceptors is relatively similar, the structure of our retinas is very different: humans have a macula (an area with a high concentration of cones) and a lower frequency of rods, while mice have no macula and a very high frequency of rods. Something similar applies to the overall structure of the eye: the mouse eye has a very large lens for its size, and its small size only allows injection of small volumes (1-3 µL), while the human eye has a comparatively smaller lens and can receive injection volumes of up to 300 µL. This means that only in a large animal model can we perform a surgery similar to what we want to perform on patients. For these reasons, preclinical studies use dog, cat, pig and monkey eyes, which closely resemble the human eye anatomically and histologically.
  • Immune System: The mouse immune system is rudimentary compared to that of higher primates. That is why many preclinical studies where the immune system is relevant (such as gene therapy with viral vectors) are performed in monkeys, to better predict potential immune reactions. An example is the development of anti-AAV and anti-transgene antibodies found in gene therapy patients. These were not found in most mouse experiments because the mouse immune system does not develop them. Non-human primate models, however, develop a response more similar to that found in the patients of the first clinical trials. Even so, non-human primates may not be a good enough model to predict all the immune responses found in patients.
  • Neuroanatomy and behavior: Different animals have evolved to follow different survival strategies and their brains reflect that. Small rodents have a neuroanatomy and behavior that cannot be compared to that of humans. Larger animals are more comparable to us in both neuroanatomy and behavior and thus a better model for translational applications.
  • Biodistribution: That is possibly the most important point of all. A body that is most similar to ours in its overall conformation (shape, size, complexity, organ functions and even cell receptors) will be a better model for biodistribution of a drug after we deliver it. We can see where and how the drug goes and what effects it has, and that gives us a clear idea of what we can expect in humans.

Going back to the main topic of this contribution: many of the points I have mentioned cannot, as of yet, be addressed without a large animal model, and they would be impossible to address without any model at all. It is true that we can use something like an organoid or a computational model to gain some information, and that could hopefully replace some animal experimentation in the near future. However, there is other essential information, especially unexpected and dangerous effects, that so far cannot be obtained without a clinically relevant animal model. We all hope that computational models will evolve enough to substitute for that, but a lot of information and development is still lacking before we reach that stage. Similarly, the development of synthetic organisms that would allow us to test biodistribution and intervention techniques in a clinically relevant setting is still far from reality.

[1]Winkler PA, Occelli LM, Petersen-Jones SM. Large Animal Models of Inherited Retinal Degenerations: A Review. Cells. 2020;9(4):882. Published 2020 Apr 3. doi:10.3390/cells9040882

[2]Colella P, Ronzitti G, Mingozzi F. Emerging Issues in AAV-Mediated In Vivo Gene Therapy. Mol Ther Methods Clin Dev. 2017 Dec 1;8:87-104. doi: 10.1016/j.omtm.2017.11.007. PMID: 29326962; PMCID: PMC5758940.

Muhammad M Rahman 3 months ago
Excellent points, thank you. Yes, it is near impossible to develop artificial models that account for behaviour. I would also add that with animal models, long-term tests can run for months and even years, which is currently not possible with any artificial model. However, with organ-on-a-chip models developing, it is not beyond our reach to create models for a number of organs and somehow integrate them into one larger system.
Shubhankar Kulkarni 3 months ago
All important points Manel Lladó Santaeularia! I would like to stress the point about exposing unwanted and dangerous effects using animal models. In vivo experiments are useful for tracking the effects of a drug throughout the body of the organism. If a drug that is supposed to help the heart in some way damages the kidneys, such an effect would not be observed in in vitro cell or even organ studies, and may or may not be observed in computational models. Moreover, animal behavior is intricately connected to physiology: every aspect of behavior has some effect or cause in physiology. Studying behavior is therefore also a good proxy for the changes happening inside the organism.
Manel Lladó Santaeularia 3 months ago
Shubhankar Kulkarni Great point. Behavior is clearly not something that can be easily modelled, because it emerges from the interaction of mind and physiology and is thus extremely complex and unpredictable. That is the field where I believe it will take longest to see significant substitution of animal experimentation.

However, the more we know about the different organs (i.e. cell receptors, physiology) and about the mechanisms of action of drugs themselves, the easier it will be to generate in silico models that accurately predict the effects of drugs on different parts of the human body. The difficulty will probably lie in integrating all of those models of different organs or different effects into one, which would immensely facilitate this kind of study (imagine having to run a separate simulation for each organ or for each compound of your drug). And this would still require extensive iteration (which would regardless need some animal experimentation to confirm the results obtained by the artificial intelligence) and would probably take a very long time. However, it would be worth it in the long term, especially because this kind of in silico model could probably mimic the human body in a way that other organisms cannot, considering some of the differences I have already mentioned.

Clearly the most important step towards something like that is to collect as much information as possible, from a biochemical, molecular, physiological and clinical standpoint, and to find a way to integrate and convert all this information into a proper model. Any brilliant minds volunteering?

Making humans in the lab on microchips?

Muhammad M Rahman Feb 10, 2021
The best way to know if your drug works in humans is to test it in humans, which is why all drugs must pass clinical trials before approval. Before then, scientists use animals and 2D and 3D models to test drugs, and even then the data will not be convincing enough to bypass the clinical trial stage. What if we could make artificial models for testing? We cannot make whole artificial humans, but we can now create multiple organ systems that mimic complex human physiology, known as organs-on-a-chip. These are microfluidic chips that can be customised to suit any type of experiment. For example, if you want to see how a drug interacts with the skin, you make a chip that incorporates keratinocytes (epidermis) and fibroblasts (dermis) in separate layers. You might then add sweat glands, hair follicle cells, blood-vessel-forming endothelial cells and so on to make the model as accurate as possible. This is not a case of simply mixing cells together: organ-on-a-chip fabrication involves 3D printing onto a chip, so the skin layers can be made to mimic human skin. What about mimicking skin elasticity? Cells, hydrogels and other biological materials can be bioprinted onto the chip. The result is a very complex organ system in which you can manipulate multiple factors including flow, pressure, oxygen levels and pH.

Now consider that scientists have successfully developed blood-brain barrier, bone marrow, intestine, liver, pancreas and heart chips, and you start to form a human. Indeed, human-on-a-chip models are developing fast: studies have successfully combined 13 systems for the purpose of drug testing. Approximately 40% of drugs fail clinical trials even after positive animal testing, so would this percentage drop if we moved towards organ-on-a-chip models?

[1]Zhang, B. et al. (2018) Advances in organ-on-a-chip engineering. Nat. Rev. Mater. 3, 257–278

[2]Xia, Y. and Whitesides, G.M. (1998) Soft lithography. Angew. Chem. Int. Ed. Eng. 37, 550–575

[3]Duffy, D.C. et al. (1998) Rapid prototyping of microfluidic systems in poly(dimethylsiloxane). Anal. Chem. 70, 4974–4984

[4]Whitesides, G.M. (2006) The origins and the future of microfluidics. Nature 442, 368–373

[5]Bhatia, S.N. and Ingber, D.E. (2014) Microfluidic organs-on-chips. Nat. Biotechnol. 32, 760–772

[6]Wevers, N.R. et al. (2018) A perfused human blood-brain barrier on-a-chip for high-throughput assessment of barrier function and antibody transport. Fluids Barriers CNS 15, 23

[7]Marturano-Kruik, A. et al. (2018) Human bone perivascular niche-on-a-chip for studying metastatic colonization. Proc. Natl. Acad. Sci. U. S. A. 115, 1256–1261

[8]Shim, K.Y. et al. (2017) Microfluidic gut-on-a-chip with three dimensional villi structure. Biomed. Microdevices 19, 37

[9]Ma, C. et al. (2016) On-chip construction of liver lobule-like microtissue and its application for adverse drug reaction assay. Anal. Chem. 88, 1719–1727

[10]Shik Mun, K. et al. (2019) Patient-derived pancreas-on-a-chip to model cystic fibrosis-related disorders. Nat. Commun. 10, 3124

[11]Ahn, S. et al. (2018) Mussel-inspired 3D fiber scaffolds for heart-on-a-chip toxicity studies of engineered nanomaterials. Anal. Bioanal. Chem. 410, 6141–6154

[12]Miller, P.G. and Shuler, M.L. (2016) Design and demonstration of a pumpless 14 compartment microphysiological system. Biotechnol. Bioeng. 113, 2213–2227

[13]Van Norman, G.A. (2019) Limitations of animal studies for predicting toxicity in clinical trials: is it time to rethink our current approach? JACC Basic Transl. Sci. 4, 845–854

Shubhankar Kulkarni, 3 months ago
Great! This would certainly help reduce the number of animal experiments.

To extrapolate the idea further, we could combine the Digital Twin concept with this human-on-a-chip concept. What if we could develop organs that specifically mimic a particular person's organs? The patient's DNA could be extracted and a human-on-a-chip constructed from it. This would take us another step closer to treating the patient by targeting their own organs on the chips.

If human-on-a-chip is feasible, this idea should be too. There might be ethical issues, though; I don't know. Also, how much time does it take to develop an organ on a chip? If a patient needs medical treatment within a span of months, the organ construction would have to be completed well before then.

3D hair follicle models

Muhammad M Rahman, Feb 19, 2021
Hair growth studies were for a long time conducted only in animal models, but these are now being phased out. But how can you solve the problem of hair loss without testing in animals, and how do the artificial models compare? Traditionally, 2D models focused on growing the hair follicle from dermal papilla cells, with limited success. 3D models have followed, incorporating more of the follicle's components, such as hair follicle stem cells and fibroblasts, with promising results. The key to hair follicle growth still remains elusive, and admittedly mouse models still give the best results in this regard, although non-animal models are advancing. Is it better to continue with the mouse models that yield better results, or to develop the 3D models further? Is it ethical to keep using mice to further hair growth research?

[1] Higgins CA, Richardson GD, Ferdinando D, Westgate GE, Jahoda CA. Modelling the hair follicle dermal papilla using spheroid cell cultures. Exp Dermatol. 2010 Jun;19(6):546-8. doi: 10.1111/j.1600-0625.2009.01007.x. Epub 2010 Apr 20. PMID: 20456497.

[2] Castro AR, Logarinho E. Tissue engineering strategies for human hair follicle regeneration: How far from a hairy goal? Stem Cells Transl Med. 2020 Mar;9(3):342-350. doi: 10.1002/sctm.19-0301. Epub 2019 Dec 26. PMID: 31876379; PMCID: PMC7031632.

Terminally ill patients

Povilas S, Mar 10, 2021
For patients who are terminally ill and have limited time to live anyway, testing a novel drug or treatment method targeting their disease is a nothing-to-lose situation. I would bet that the risk of poisoning the person or causing them more suffering before their time could be ruled out without using animals.
Shubhankar Kulkarni, 2 months ago
The idea does not seem ethical. And even less so given the direction and speed of medical research :)
Another problem with this idea is that physiology and organ function have already deteriorated to some extent in terminally ill patients. Testing drugs in such patients will therefore inevitably introduce biases. Drugs usually work best and show the fewest side effects when the patient is young and the disease has not advanced. Results from terminally ill patients will therefore not be reproducible in young patients with an early-stage diagnosis of the disease - the group that could benefit the most from the drug. For example, an anti-diabetic drug may be contraindicated for patients with sub-optimal kidney function; a diabetic with multiple organ failure will therefore not respond well to such a drug, biasing the results. Hence, most drug trials include a range of patients, both in age and in degree of illness. [1] The experiment would have to be repeated in the appropriate target group.

References:
1. Shenoy P, Harugeri A. Elderly patients' participation in clinical trials. Perspect Clin Res. 2015;6(4):184-189. doi:10.4103/2229-3485.167099
Povilas S, 2 months ago
Shubhankar Kulkarni Yes, those are all good technical points; I didn't know much about that. But why does it not seem ethical? Nobody would force the patients to test the drug - it would be done only with their consent. I think the data could nonetheless be at least partially useful, no? :) Also, there is some chance of at least a slight improvement in their condition, especially if the drug being tested is novel and developed to have advantages over older-generation drugs.
Shubhankar Kulkarni, 2 months ago
Povilas S I am not sure how the data could be used. Terminally ill patients may suffer from a range of problems, and no two individuals may be alike with respect to their diagnosis. The benefit to one patient might come from not having a particular problem that made another patient terminally ill. To continue the example from my previous comment: type 2 diabetes is associated with retinopathy, neuropathy and nephropathy, as well as macrovascular conditions. An anti-diabetic drug contraindicated for patients with sub-optimal kidney function will not benefit those with severe nephropathy but might benefit those with the other pathologies; a patient may also have multiple pathologies of differing intensities. Moreover, since there is no previous data on the effect of the novel drug in humans (we have not conducted a proper clinical trial), we do not know whether the drug is contraindicated for patients with kidney failure. It would be like shooting in the dark: some arrows hit the bullseye and some do not, and we would not know which ones hit or why. How could we associate benefits and side effects with a condition? Also, when we treat only terminally ill patients, the trial is no longer randomized, adding a selection bias at the inception. [1] Complications of a disease vary a lot, and the intention of a randomized controlled trial is to minimize that variation so the effect stands out.

Regarding ethics: for terminally ill patients, physicians recommend only standard care that does not cause the patient pain; intensive and curative care is stopped. [2] Testing a novel drug about which we know very little therefore seems unethical.

Here is another reason to avoid the idea: the prediction of "terminally ill" is not that accurate. In one study, physicians made an accurate prognosis in more than 90% of cases when death occurred within 7 days. Over longer periods, their predictions became inaccurate: for patients expected to die within 8-21 days, predictions were accurate in 16% of cases, and for patients expected to die within 22-42 days, accuracy fell to 13%. [3] Drug studies very rarely test only a single dose or a few doses of a novel drug; most clinical trials run for at least 6 months and continue to gather follow-up data for years afterwards. Seven days is far too short a period to test a novel drug, and we cannot test it in patients who may live beyond 7 days because the prognostic accuracy drops to 16% and the patients may live longer or recover altogether.

References:
1. https://derangedphysiology.com/main/required-reading/statistics-and-interpretation-evidence/Chapter%202.1.5/types-bias-medical-research
2. https://www.allaboutcancer.fi/treatment-and-rehabilitation/terminal-care/#5fd03ecc
3. Brandt HE, Ooms ME, Ribbe MW, van der Wal G, Deliens L. Predicted survival vs. actual survival in terminally ill noncancer patients in Dutch nursing homes. J Pain Symptom Manage. 2006 Dec;32(6):560-6. doi: 10.1016/j.jpainsymman.2006.06.006. PMID: 17157758.
