Pre-2018 blog
#11
January 10, 2017

A high-quality, low-cost, non-invasive Continuous Glucose Monitoring (CGM) system, based mainly on Raman spectroscopy, is presented. In addition, a number of sensors provide information about the patient's context. The CGM recalibrates itself automatically.

Designing a non-invasive continuous glucose monitor (CGM) is an incredibly complex problem that presents a number of challenging medical and technological hurdles. It is said that around 70 companies have tried to bring non-invasive glucose monitoring devices to market, without any success.

Quality in our CGM proposal comes from the number of technologies used to increase measurement precision. Understanding the biological operating context makes it possible to predict glucose values accurately.

More information here: Glucose_monitor

#12
June 4, 2017

Not much to show, but some news:
Sound files have problems that I did not anticipate. What I was expecting, from the analysis of Physionet 2016 submissions, was noise, spikes, weird amplitudes and similar distortions of the signal.
What I found was different: there is little noise once you filter a bit, and there are few spikes.
However, the signal is sometimes biased (more negative values than positive), and it also appears to have little in common with textbooks: I can easily detect S1 and S2 events, but it is difficult to find S3 and S4.

When you listen to the sounds, half of them sound weird. I am not a cardiologist, but I find it difficult to recognize what I would expect from a "textbook" heart sound.
This makes me think again about Physionet 2016: successful submissions were mainly about heavy filtering, dealing with spikes with sophisticated algorithms, and finding characteristics (features, in ML slang) that encompass the whole file, such as RR variability:

https://en.wikipedia.org/wiki/Heart_rate_variability

Clearly my approach is different: I focus on what identifies a heart beat, which is entirely new. But I still plan to implement the RR variability analysis and tie it to my HMM classifier, which will become quite hybrid in the process.
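As a rough illustration of the kind of whole-file feature this involves, here is a minimal Java sketch (hypothetical class and method names, not part of my code) computing two classic RR-variability indicators, SDNN and RMSSD, from a list of detected beat times:

    import java.util.List;

    // Hypothetical helper: classic time-domain HRV indicators from beat times.
    public final class RrVariability {

        // beatTimes: detected beat onsets in seconds, in increasing order.
        public static double[] sdnnAndRmssd(List<Double> beatTimes) {
            int n = beatTimes.size() - 1;                  // number of RR intervals
            if (n < 2) throw new IllegalArgumentException("need at least 3 beats");

            double[] rr = new double[n];
            for (int i = 0; i < n; i++) rr[i] = beatTimes.get(i + 1) - beatTimes.get(i);

            double mean = 0;
            for (double v : rr) mean += v;
            mean /= n;

            double var = 0;                                // SDNN: std dev of RR intervals
            for (double v : rr) var += (v - mean) * (v - mean);
            double sdnn = Math.sqrt(var / (n - 1));

            double sumSq = 0;                              // RMSSD: successive RR differences
            for (int i = 1; i < n; i++) {
                double d = rr[i] - rr[i - 1];
                sumSq += d * d;
            }
            double rmssd = Math.sqrt(sumSq / (n - 1));

            return new double[] { sdnn, rmssd };
        }
    }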

#13
June 15, 2017

This is a quick description of the feature identification and segmentation part of our early heart failure detection algorithm.

1) A sound file consists basically of a float array and a sampling rate.
2) This sound is normalized in amplitude (though we can do without) and in sampling rate (2000 samples per second).
3) Contrary to what is done in Physionet 2016, there is no filtering or elimination of "spikes".
4) The cardiac rhythm is detected through events that are roughly the S1 events (FindBeats.java). This is not trivial, as there are noises, spikes, "plops", respiration, abnormal heart sounds, human speech and, in one case, even dog barks! Basically there are two ways to achieve beat detection: one is to find the frequency of the envelope of the signal, the other is to take an FFT. But both approaches are subjective, as they imply deciding beforehand what an acceptable heart rate is. What does not help is that the heart rate can roughly go from 30 to 250 beats per minute, so when we detect three or four main frequencies in that range in the FFT of the sound file, which one is the correct one? One cannot decide in advance. What we do is a two-step, indirect approach that fails quickly if we make a wrong guess, so it can converge quickly toward a result (a simplified code sketch of this procedure is given after the list below):
* In the first step we try to estimate the heart beat duration independently of the heart rate (because there are noise sources that make heart rate counting unreliable).
* We use this first estimate to make a better guess of the heart rate in the second step. If that guess gives obviously wrong results, we change the threshold used to detect the heart rate.

4-1) We seek the optimal detection level (threshold) for detecting the heart beat rate in this sound file.
4-1-1) We create a window on the sound file that is heavily low-pass filtered (currently at 200 Hz).
4-1-2) We record the times of downward passage through the current threshold level.
4-1-3) If a convincing heart rate (not necessarily exact) is obtained, the detection level is kept; otherwise it is lowered and the procedure is repeated.
4-2) With this threshold, we try to obtain a cardiac rhythm with roughly the same procedure as above, but with a window filtered at 1000 Hz and within a wide interval around the heart rate obtained in the first step.
4-2-1) We create a sound window which is low-pass filtered (currently at 1000 Hz).
4-2-2) A downward passage through the threshold is detected; ad hoc margins are imposed between the start and the end of the event.
4-2-3) If a convincing cardiac rhythm is obtained, the heart rate is kept; otherwise the detection level is lowered and we start again.

So we have here an approach which is very progressive, yet delivers results in a short time. It is also quite insensitive to sound events that could derail the heart beat counting: the first step provides good indications of where the real heart beat lies, so a spike may cause the nearest S1 to go undetected, but we still gain a good idea of the heart beat duration.
5) In addition to S1, many other events are detected. A priori we assume that these are the other events S2, S3, S4. Even when their number goes higher than four, this is useful for classifying unusual heart sounds.
6) These two sets of Sx event detections are then reconciled:
6-1) The list of S1 events is made more reliable, which makes it possible to deduce the events S2, S3, S4 (Segmentation.java).
6-2) A signature of the heart beat is computed, acknowledging that there is more to a heart beat than its time of arrival and duration. We tested several schemes and decided to use a Huffman compression of the heart beat. We also had the idea of using this for yet another kind of feature detection without training, but it is not implemented at the moment for lack of resources.
7) From there one can either train an HMM or classify. At this point we leave the cardiac-specific parts behind; it is just an HMM.
8) The classification made by the HMM is interpreted, with a similarity score and comments on the Sx events.
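To make step 4 more concrete, here is the simplified sketch promised above. It is a Java toy version of the threshold-lowering loop (hypothetical names; the real FindBeats.java is more involved): it takes an already low-pass filtered envelope, records downward threshold crossings, and lowers the threshold until the implied heart rate falls in a plausible range.

    import java.util.ArrayList;
    import java.util.List;

    // Simplified sketch of the threshold-lowering beat detection (step 4).
    // 'envelope' is the low-pass filtered signal, sampled at 'fs' Hz.
    public final class BeatSketch {

        public static List<Integer> findBeats(double[] envelope, double fs) {
            double max = 0;
            for (double v : envelope) max = Math.max(max, Math.abs(v));

            // Start with a high threshold and lower it until the rate looks plausible.
            for (double threshold = 0.8 * max; threshold > 0.05 * max; threshold *= 0.9) {
                List<Integer> crossings = downwardCrossings(envelope, threshold);
                double durationSec = envelope.length / fs;
                double bpm = crossings.size() * 60.0 / durationSec;
                if (bpm >= 30 && bpm <= 250) {       // convincing heart rate
                    return crossings;
                }
            }
            return new ArrayList<>();                 // no convincing rhythm found
        }

        // Sample indices where the signal passes downward through the threshold.
        private static List<Integer> downwardCrossings(double[] x, double threshold) {
            List<Integer> result = new ArrayList<>();
            for (int i = 1; i < x.length; i++) {
                if (x[i - 1] >= threshold && x[i] < threshold) result.add(i);
            }
            return result;
        }
    }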

Another difference from Physionet 2016 is that they add a second approach, the variability of the cardiac rhythm, for which they calculate a lot of indicators. The HMM does this more accurately, with a transition probability from one internal state to another that can be explained, instead of a scalar computed over the whole file.

In a future version, I would like to work on the frequency peaks that can be identified with an FFT.
The general idea would be to look for what appear to be harmonics but which would in fact indicate echoes along different paths.
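As a starting point, a minimal way to look at those peaks could be something like the following sketch (a naive magnitude spectrum over a coarse frequency grid rather than a real FFT, with hypothetical names):

    // Naive magnitude spectrum over a coarse frequency grid, to locate the main peaks.
    // Not an FFT (far too slow for production); just enough to illustrate the idea.
    public final class SpectrumSketch {

        // Magnitude at each tested frequency (Hz) for signal x sampled at fs Hz.
        public static double[] magnitudes(double[] x, double fs, double[] freqsHz) {
            double[] mags = new double[freqsHz.length];
            for (int k = 0; k < freqsHz.length; k++) {
                double re = 0, im = 0;
                double w = 2 * Math.PI * freqsHz[k] / fs;
                for (int n = 0; n < x.length; n++) {
                    re += x[n] * Math.cos(w * n);
                    im -= x[n] * Math.sin(w * n);
                }
                mags[k] = Math.hypot(re, im);
            }
            return mags;
        }

        // Indices of local maxima in the magnitude array: candidate peaks/"harmonics".
        public static java.util.List<Integer> peaks(double[] mags) {
            java.util.List<Integer> p = new java.util.ArrayList<>();
            for (int i = 1; i + 1 < mags.length; i++) {
                if (mags[i] > mags[i - 1] && mags[i] > mags[i + 1]) p.add(i);
            }
            return p;
        }
    }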

#14
September 27, 2017

When it comes to studying heart sounds with algorithms, it is important not to be naive: technology is a tool, not an oracle. Mastering what we do is of utmost importance, and having multiple sources of information helps to gain confidence in the outcome of the algorithms.

One important piece of information for segmenting heart beats in sound is the heart rate, but it is much better to know approximately when each heart beat begins.
Heart beat events are easy to recognize with an ECG because there is a very distinctive figure, the "R" peak, which signals the contraction of the ventricles.
However, the ECG is difficult to sample and does not easily give much other usable information. For example, one has to interpret the deflection of waves or figure out whether missing beats are normal or not. Surprisingly, our heart can miss beats, for example if we move during the sampling.
Phonocardiograms are easier to sample and they give more trustworthy information.

However, automatically segmenting phonocardiograms is not an easy task either. There is a well-known challenge for heart specialists on this subject, PhysioNet/CinC, and there are also Kaggle challenges for data scientists.

Phonocardiograms are usually obtained with Doppler ultrasound or with electronic stethoscopes, but they can be sampled with ordinary microphones as well, provided that ambient sounds are not too loud.

As webcams are often equipped with microphones, and SBCs like the Raspberry Pi can use a webcam out of the box, I was thinking of using photoplethysmography to at least get the heart rate. This is ongoing work, but results are encouraging. I am using a Logitech 270 webcam.
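A minimal Java sketch of the idea (hypothetical names; capturing frames from the webcam is left out): the pulse slightly modulates skin color, so the mean green value per frame, taken over time, carries the heart rate, which can then be read off an autocorrelation peak.

    // Rough sketch of webcam photoplethysmography.
    // 'greenMeans' holds one mean green value per frame, 'fps' is the capture rate.
    public final class PpgSketch {

        public static double estimateBpm(double[] greenMeans, double fps) {
            int n = greenMeans.length;
            double mean = 0;
            for (double v : greenMeans) mean += v;
            mean /= n;

            // Search lags corresponding to 40..180 bpm and keep the autocorrelation peak.
            int minLag = (int) (fps * 60.0 / 180.0);
            int maxLag = (int) (fps * 60.0 / 40.0);
            int bestLag = minLag;
            double best = Double.NEGATIVE_INFINITY;
            for (int lag = minLag; lag <= maxLag && lag < n; lag++) {
                double acc = 0;
                for (int i = 0; i + lag < n; i++) {
                    acc += (greenMeans[i] - mean) * (greenMeans[i + lag] - mean);
                }
                if (acc > best) { best = acc; bestLag = lag; }
            }
            return 60.0 * fps / bestLag;              // beats per minute
        }
    }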

The result of this study is intended to be included in my Hjerte software package, which now runs on a Raspberry Pi.

#15
May 3, 2017

Six months ago we registered a new project on Hackaday and a few other places.
https://hackaday.io/project/19685-early-and-low-cost-detection-of-heart-failure

The idea was to detect the heart failure condition early, because it is a condition that affects most of us as we age, and there is a lot of material online thanks to various challenges on this subject, like this one:
https://physionet.org/challenge/2016/

To create a proof of concept we used a low-cost fetal Doppler ($50) and a Linux box, and were able to record heart sounds from an adult without using gel. So one of the requirements for medical devices was fulfilled: to be ready to use in seconds.

For most medical devices there is an implicit requirement: to make the output understandable, the device must offer an explanation of the medical statement. So using black-box ML à la Kaggle is out of the question.

In heart sound competitions like Physionet 2016, HMMs are trained in order to create a statistical model of the heart sounds of some condition. HMMs can "explain" their internal model by showing the probability of appearance of each state, for example the probability of arrival of an S2 sound at some time after an S1 sound in a particular sequence of heart sounds.
So an HMM can be used to classify a new sequence of heart sounds as either quite similar to the trained model or not.
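For illustration, classification against a trained HMM essentially means computing how likely the new observation sequence is under that model, for example with the forward algorithm. This is a generic Java sketch with hypothetical names, not our actual classifier:

    // Forward-algorithm log-likelihood of an observation sequence under a discrete HMM:
    // pi = initial state probabilities, a = transition matrix, b = emission matrix.
    public final class HmmScore {

        public static double logLikelihood(double[] pi, double[][] a, double[][] b, int[] obs) {
            int states = pi.length;
            double[] alpha = new double[states];
            double logLik = 0;

            for (int s = 0; s < states; s++) alpha[s] = pi[s] * b[s][obs[0]];
            logLik += rescale(alpha);

            for (int t = 1; t < obs.length; t++) {
                double[] next = new double[states];
                for (int j = 0; j < states; j++) {
                    double sum = 0;
                    for (int i = 0; i < states; i++) sum += alpha[i] * a[i][j];
                    next[j] = sum * b[j][obs[t]];
                }
                alpha = next;
                logLik += rescale(alpha);
            }
            return logLik;   // higher means "more similar to the trained model"
        }

        // Normalizes alpha in place and returns the log of the normalization factor.
        private static double rescale(double[] alpha) {
            double sum = 0;
            for (double v : alpha) sum += v;
            for (int i = 0; i < alpha.length; i++) alpha[i] /= sum;
            return Math.log(sum);
        }
    }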

One might ask why not use deep learning, since it seems to have made great strides recently and very nice software such as TensorFlow is available.
There is a big internal difference between ML using CNNs à la TensorFlow and ML using HMMs: in "an ideal world" a CNN finds its features without human intervention, whereas an HMM needs each observation to be "tagged" with some human knowledge via a Viterbi or similar function. The tagging is part of what makes the resulting model understandable; however, automatic tagging (as in unsupervised learning) is indeed hard.

In truth there is a similarity between the design of successful CNNs and HMMs: both have a cost function. However, CNN cost functions do not create meaning.
Designing the cost function of a CNN or the Viterbi function of an HMM is the most important part of any ML setup. All the claims we hear about the effectiveness of ML are due to the design of those functions, not to some fancy ML algorithm.
It is a very hard job, far beyond the state of the art.

In order to circumvent this problem, most ML proposals use another ML setup to create the cost function, as in most Physionet 2016 submissions or in a recent, highly regarded article in the domain of skin cancer detection: http://www.nature.com/nature/journal/v542/n7639/abs/nature21056.html.

Indeed, if one uses ML to create the cost function, the resulting model becomes highly opaque, and medical policy makers, scientists, or specialists will find it useless or even dangerous.

In the long term this practice of using an ML-derived cost function will be discouraged, but I suppose it is part of the current hype curve around "deep learning". It is worse when using small-signal ML such as the Deep Forest algorithm, where it becomes impossible (today) to reverse engineer the ML model by perturbing it. In addition, deep learning cannot be done with a $50 device; it requires huge computing facilities.

So we created our own Viterbi function for our HMM, and it is quite efficient while being quite simple. The next steps are to improve it, make it more informative, and move it from the Linux box to a microcontroller. Stay tuned.

#16
October 26, 2017

You may have read this article about self-care by cancer patients who comb the scientific literature, find a description of a vaccine for their HLA type, and convince a medical service to inoculate them with this vaccine.

We, at Padirac Innovation, gave this some thought and came up with an algorithm that takes side effects into account, because as usual it is not enough to find a cure; the cure must also be bearable for the patient.

As we did for Hjerte, our heart failure detector, we created a project on Hackaday in order to document our work in this area:
https://hackaday.io/project/27871/details/

The easiest thing for us would be to start by writing a Java application, then soon enough move it to a web site where interested users can find it and use it as a service.

#17
March 10, 2016

General Electric's BiodMet is a Java PBPK (physiologically based pharmacokinetic) simulator which is quite old now. It appeared in 2008 and has not received any improvements since at least 2013. Indeed, there are ferocious competitors like Certara's SimCyp and many others.

However, it is still an impressive piece of software, with a GUI providing a detailed simulation of many organs, seemingly down to cell level. Not everything is perfect: there is nothing about the eyes, the circulatory system is very basic, there is only one lung and one kidney, and simulation of the genital organs is not implemented, but for free-to-use software it is nevertheless awesome. When the software runs a simulation it says it uses 2126 ODEs, which is extremely impressive.

In order to test the veracity of this claim, we used the same approach as in the last post. It turns out the claim is more or less true.

Actually, the body itself is simulated with 52 equations (mostly modeling blood usage by organs at cell level, plus a few modeling inter-organ fluid exchanges). There is also, for each organ, a set of 82 ODEs modeling how the drug moves from one compartment to the next and how it is transformed. The pattern is to model how the drug moves from the vasculature to the organ's interstitial medium, from there back to the vasculature and into the cell's cytosol, and from there to each compartment of the cell.
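To give an idea of what such compartment equations look like, here is a toy Java sketch (made-up names and rate constants, nothing taken from BiodMet): two compartments, plasma and tissue, exchanging the drug at rates k12/k21 with elimination ke, integrated with a plain Euler step.

    // Toy two-compartment drug model: plasma <-> tissue exchange plus elimination.
    // dCp/dt = -k12*Cp + k21*Ct - ke*Cp ;  dCt/dt = k12*Cp - k21*Ct
    // Real PBPK simulators use many more compartments and stiff ODE solvers.
    public final class TwoCompartmentSketch {

        public static void main(String[] args) {
            double k12 = 0.4, k21 = 0.2, ke = 0.1;   // made-up rate constants (1/h)
            double cp = 10.0, ct = 0.0;              // initial concentrations
            double dt = 0.01;                        // time step (h)

            for (int step = 0; step <= 2400; step++) {
                double t = step * dt;
                if (step % 100 == 0) {               // print once per simulated hour
                    System.out.printf("t=%5.1f h  plasma=%6.3f  tissue=%6.3f%n", t, cp, ct);
                }
                double dCp = -k12 * cp + k21 * ct - ke * cp;
                double dCt = k12 * cp - k21 * ct;
                cp += dt * dCp;
                ct += dt * dCt;
            }
        }
    }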

BiodMet is still available at: http://pdsl.research.ge.com/

#18
January 10, 2017

Currently there is no portable device that can check for diseases of the aging eye such as glaucoma, age-related macular degeneration, diabetic retinopathy, Alzheimer's disease, cataract, clinically significant macular edema, keratoconjunctivitis sicca (dry eye disorder), Sjogren's syndrome, retinal hard exudates, ocular hypertension, and uveitis.

We propose a portable device which, when placed before one eye but without any physical contact, analyzes its natural infrared spectrum in order to detect molecules that reveal a potential medical condition. If a biomarker is detected, the device asks the user to consult a medical doctor, with an indication of urgency but without disclosing any medical information. The doctor, on the contrary, can securely access a wealth of information without needing a dedicated device.

The medical doctor proposes this tool to the patient and remains constantly in control of the device and of the relationship she has with her patient.

More information here: passive_eye_care

#19
April 7, 2017

DARPA’s AIDA
DARPA's new AIDA program may (also) help provide a better understanding of science publications and results by helping to separate interesting data from irrelevant data.

Information complexity has exceeded the capacity of scientists to glean meaningful or actionable insights, and doing science becomes more difficult as time passes. World-class experts in one field will not understand statements made by scientists even slightly outside their field.
The situation is even worse for other science stakeholders, such as scientific managers and policy makers, who have a long-standing interest in developing and maintaining a strategic understanding and evaluation of scientific activity, field landscapes, and trends. Information obtained from scientific publishing is often analyzed without its context. Often, because of the complexity and superabundance of information, independent analysis results in interpretations which may be inaccurate.
It would be interesting to overcome the noisy and often conflicting assertions made in today's scientific publishing environment through common tooling. Some efforts have already been made, for example the excellent Galaxy tool in biology and, to a lesser extent, notebook interfaces like Jupyter in coding. Another interesting trend is pre-print activity, which helps scientists share information unsuitable for formal publication.
DARPA's AIDA program aims to create technology capable of aggregating and mapping pieces of information. AIDA may provide a multi-hypothesis "semantic engine" that would automatically mine multiple publishing sources, extract their common foreground assertions and background knowledge, and then generate and explore multiple hypotheses that interrogate their true nature and implications.
The AIDA program hopes to determine a confidence level for each piece of information, as well as for each hypothesis generated by the semantic engine. The program will also endeavor to digest and make sense of information or data in its original form and then generate alternate contexts by adjusting or shifting variables and probabilities in order to enhance accuracy and resolve ambiguities in line with real-science expectations.
Even structured data can vary in the expressiveness, semantics, and specificity of their representations. AIDA has the potential to help scientists and science decision makers refine their analyses so that they are more in line with the larger and more complete overall context, and in doing so achieve a more thorough understanding of the elements and forces shaping science.

#20
August 3, 2017

When one claims to design an early detector for some illness, it is important to think about early detection in healthcare in general: what are its benefits, its inefficiencies, and the new problems it creates?

As a personal anecdote about the usefulness of early detectors: I had a skin carcinoma that two doctors (the family MD and the company MD) saw for years without reacting; one of them even asked me what it was. In the end it was a cardiologist who told me it was probably a carcinoma and that I had to consult a specialist quickly.
MDs have to know what to make of the test results of those devices. For example, some medical organizations are starting to provide free kits for genetic screening for some conditions [0], since as we know some drugs work well for some genomes but less well for others, a concept a bit weird in itself but very fashionable at the moment.
But those kits do not work the same way, so their results are not comparable with each other: some may analyze the DNA in blood, while others may take a sample with a biopsy needle. Neither can claim to capture the full picture of the tumor's mutations. In addition, a tumor's genome evolves very quickly and is not homogeneous; it is as if many mutations were branching out quickly from a common ancestor cell. Some time later, a tumor is the site of several unrelated mutations.
Tests sometimes provide conflicting or overlapping results for the same patient. Researchers at the University of California, San Diego, published a 168-patient study on discordance in early 2016 showing there are overlaps, as well as differences, between DNA analyses from tissue biopsies and from blood samples.
Some tests even make drug suggestions; studies have shown that different commercial solutions may in some cases suggest different drugs, or fail to suggest drugs that an MD would have prescribed. Those commercial products need to improve, and doctors' professional bodies need to develop guidelines teaching how to cope with these new tools.

Another issue is false negatives. The press recently reported an unfortunate case in which a woman felt something was wrong with her baby in the last months of her pregnancy. She then used a fetal Doppler and found a heart beat; unfortunately the baby was stillborn. It is possible that if she had not used her fetal Doppler, she would have gone immediately to the hospital, which might have saved the baby.
False positives are another problem. As an older man I am regularly reminded by the state health insurance to check my PSA; PSA (prostate-specific antigen) is a marker of prostate cancer. I am aware of the risk of cancer, but large studies, one in the US and two in Europe, found that for a thousand people screened positively, one man will probably be saved, while several dozen will suffer severe degradation of their quality of life and health in general.

The testing process may also impose travel costs on the patient, loss of time and income, and discomfort or even suffering, especially in women's healthcare. Unnecessary biopsies and other medical procedures for people who are wrongly diagnosed, or whose cancer might never have spread, can also hasten health problems.
While early detectors might seem a good idea in general, one problem is the anxiety they generate: even if everything is fine, it does not mean everything will stay fine in the future, so there is a constant urge to re-check. Even medical doctors can succumb to cognitive bias: they find "something" in a mammography, then ask for more tests which come back negative but nevertheless urge more frequent testing in the future, creating unnecessary anxiety for the patient [1].

What does all this mean for the designer of an early detector of heart failure? Certainly that there is a need not to make big, unwise claims. There is also a need to collaborate with real doctors, not only scientists.
At the same time, how do we attract people's attention so that they use it and finance the R&D?

[0] http://www.xconomy.com/national/2017/05/31/in-maine-making-cancer-dna-tests-free-and-asking-tough-questions/
[1] https://blogs.scientificamerican.com/cross-check/why-we-overrate-the-lifesaving-power-of-cancer-tests/


