Why Data Alone Isn’t Enough
And where we go from here in the fight against escalating misinformation
It’s been about a month since my last post. I needed that time to step back and reflect — not just on what I want to write about, but on my broader strategy, goals, and role in this fight against misinformation. That fight has been intensifying since January, not just with the constant attacks on and intimidation of scientists, but with a full-blown legitimization of anti-intellectualism in our health agencies and the blatant glorification of ignorance altogether. This space has always been about more than just sharing facts; it’s about building trust, sharpening tools, and finding better ways to reach people. It is a constant learning process for me, too.
I appreciate your patience, and thank you for being here. Your engagement makes this work feel worthwhile because combating misinformation isn’t a solo effort; it’s a collective one. Please complete the poll at the end of this newsletter!
TECHing it Apart runs entirely thanks to reader support. Every post, whether it's a deep dive or a timely analysis, takes time, energy, and a lot of behind-the-scenes work. If you’re finding value here and want to help keep it going, consider upgrading to a paid subscription.
A Bit of Background
My father wasn’t an educated man (though he was a good businessman), but he was curious. It took long, patient conversations with him — about climate change, about American politics, about the importance of social safety nets — to slowly change his mind about certain things. That experience, along with years of work online tackling dis/misinformation, has taught me something important: evidence alone isn’t enough. What really matters is dialogue that is trusted, human, and patient.
I know — it’s hard, especially in times like these. Every day feels like another chapter in a book called “I Told You So.” But now more than ever (and in the near future as policies take hold and shape fates), many people will be reckoning with the consequences of their choices. As tempting as it is to say nothing or to gloat, it’s imperative that we offer a landing pad. As hard as it may be (even for me), that’s where real change begins. It is, when you think about it, a scalable model, but it requires active participation from us ALL.
Data Alone Isn’t Enough
I started science communication online with data, right in line with my work as an instructor at Cornell, teaching subjects such as math, nanofabrication, semiconductor physics, and power electronics (we built a really cool Class D amplifier). My work in semiconductors and process engineering revolves around measurement, statistics, first principles, failure analysis, and risk management. Consequently, as a science and risk communicator, I came to lean heavily on data to make my case. But over the years of doing this, I’ve learned that data alone isn’t persuasive when people already have feelings about what that data “means.” You can hand someone all the charts in the world, but if those numbers challenge their identity, self-esteem, or worldview, they’ll simply bounce off.
That’s why I now distinguish between risk and safety. Risk can be quantified: it’s the product of severity and likelihood (and, in engineering practice, detection or mitigation). Safety, however, is subjective: it’s how we feel about that risk. Even when the numbers say the risk is low, feelings can amplify it, distort it, or override it entirely. Ask anyone with a fear of flying — it isn’t rational; it’s emotionally driven. That’s why public health messaging that simply tells people what to do often fails. It speaks in terms of risk, while people are reacting in terms of safety. Messaging like “Do this, don’t do that” sounds like command and control, and it lands on ears already primed with distrust.
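To make that “product of severity and likelihood” concrete, here is a minimal sketch of how engineers often score risk in a failure mode and effects analysis (FMEA). The scores below are invented for illustration, not taken from any real assessment:

```python
# Illustrative FMEA-style risk priority number (RPN).
# All scores are hypothetical 1-10 ratings, invented for this example.
severity = 7     # how bad the outcome would be if the failure happened
likelihood = 2   # how often we expect the failure to occur
detection = 3    # how hard it is to catch before it causes harm (higher = harder)

rpn = severity * likelihood * detection
print(f"Risk priority number: {rpn}")  # 42 -- a ranking tool for comparing risks
```

The point isn’t the exact number; it’s that risk can be scored, ranked, and compared, whereas safety, how we feel about that number, cannot.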
Instead, we need to pivot: explain the how and the why, among other things. Give people a framework that lets them see themselves as active participants in their own health decisions. That shift, from passive listeners to engaged decision-makers, is empowering.
And empowerment is exactly what misinformation and wellness movements have learned to weaponize. They promise agency, the thrill of “doing your own research,” the feeling that you’re in control of hidden knowledge. We have to meet that energy with better tactics, not by dismissing it, but by giving people real tools, accurate frameworks, and, in a way, making them feel like they have a seat at the table of decision-making.
And THAT is exactly what I am setting out to do here with YOU. I am an educator at heart because I believe education is transformative. My closest friends will tell you I have been tutoring since elementary school. I want to bring everything I have learned here so that YOU can help spread it: through conversations, your own content, lectures…whatever medium is most convenient.
The Role of Community Messengers
Over time, it’s become pretty clear that the goal of my work is to arm trusted community messengers. I don’t just want to share facts for their own sake. Instead, I want to give people tools they can take into their own conversations, with family, friends, and communities. If you learn something here, I hope that you don’t just keep it — you teach it. Because science spreads person to person, through people willing to take the time to explain, listen, and push back against misinformation without closing the door, and that requires a connection from a trusted voice. No one said it would be comfortable, but it is absolutely necessary. This is also why I always encourage folks to use my content online and improve upon it. No gatekeeping here.
With that being said, I’ve realized I’m best positioned to focus on two niches that I know deeply, care about, and can speak to well: statistics and medical technology. I will tackle these in the context of current events (and flawed papers) when appropriate, but I will also discuss them as standalone topics. Outside of this, I will continue to point out the inconsistencies of wellness misinformation and share rebuttal techniques.
Stats, MedTech, and Everything in Between
Statistics is part of my daily work. It’s central to process control and to building new processes that are reproducible. It shapes how I make risk-based decisions: evaluating hazards, ranking risks, and weighing trade-offs. Even I can’t strip emotion completely out of risk, but statistics give me a framework that pulls me back to reality.
More importantly, after addressing countless studies over the years, I’ve realized statistics may be the single most valuable tool for evaluating science and teaching folks how to measure risk — and one of the easiest to teach quickly, if done correctly. With just a basic framework, you can skim a paper in minutes and start asking the right questions: Is this valid? Is this reproducible?
That doesn’t mean niche expertise isn’t important. It absolutely is. But statistics give you the tools to tease out meaningful patterns, spot red flags, and recognize when a claim is being oversold. It also forces a more humbling question: Do I even know enough to interpret this responsibly?
That layer of self-check is critical in an age where too many people cite studies they haven’t read or don’t understand to justify their beliefs. And with AI hallucinations now producing convincing but false “evidence,” that problem is only getting worse.
Furthermore, statistics provide a scaffold for better risk communication, which is at the heart of the work I do here and on other platforms. Numbers give us a shared language to ground conversations that would otherwise be driven entirely by emotion. They let us translate vague fears into something tangible: What is the likelihood? What is the severity? How does this risk compare to others I face every day?
Consequently, statistics don’t just quantify — they also clarify. They help highlight trade-offs, reveal whether an effect is meaningful or trivial, and separate signal from noise. They allow us to explain why a “40% increase” in relative risk might sound scary but actually represents a very small absolute risk. In this way, statistics anchor the conversation, providing structure without stripping away the human element.
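As a quick, hypothetical illustration of that relative-versus-absolute gap (the baseline rate here is made up purely for the example):

```python
# Hypothetical numbers: what a "40% increase in relative risk" can mean in absolute terms.
baseline_risk = 1 / 1000      # assume 0.1% of people experience the outcome to begin with
relative_increase = 0.40      # the headline-grabbing "40% increase"

new_risk = baseline_risk * (1 + relative_increase)
absolute_increase = new_risk - baseline_risk

print(f"Baseline risk:       {baseline_risk:.2%}")      # 0.10%
print(f"Risk after increase: {new_risk:.2%}")           # 0.14%
print(f"Absolute increase:   {absolute_increase:.2%}")  # 0.04% (0.04 percentage points)
```

The relative figure is true, but the absolute figure is what actually describes the change in someone’s day-to-day risk.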
Most importantly, statistics help shift the conversation from blanket commands (“this is safe” or “this is dangerous”) to a framework that respects people’s ability to weigh risks for themselves. That sense of agency is what makes risk communication effective — and what misinformation often hijacks.
The second focus for this blog is technology, with a bigger emphasis on medical technology. I believe technology, used responsibly, can make our lives better. The applications across medicine are vast — from imaging to vaccines to data-driven diagnostics. But the same misinformation-fueled wellness movements that demonize vaccines or promote raw milk also misrepresent these technologies, twisting them into fearmongering (as with continuous glucose monitors, or CGMs) or pseudoscience.
With the rapid growth of AI, this work becomes even more important. AI is already expanding diagnostics, from reading radiology scans faster than humans to detecting cancers and heart disease earlier. It’s driving personal health tracking through smart wearables that can flag early signs of illness, and it’s accelerating breakthroughs in brain–machine interfaces that could restore mobility to paralyzed patients or help people communicate in entirely new ways.
In a nutshell, medical technology is about to blow up. Its potential is enormous, but as these innovations grow, so will the ways they’re twisted. AI in diagnostics will be painted as “robots replacing your doctor” rather than a tool that augments care. Health trackers and wearables will be marketed by wellness influencers as ways to monitor “toxins” or justify expensive detoxes, rather than what they do — help detect trends. The same tools that could extend life and independence will be weaponized to fuel fear and sell pseudoscience.
Technology is such a powerful way to connect with people today. It is a vehicle for education. Everyone feels its impact — in their phones, in their homes, and increasingly in their health. When we explain how these systems actually work, people can tune into what’s really at stake: how the dismantling of public health and scientific institutions will affect their own care, their families, and their future.
In short, my goal is to use statistics and technology as tools to cut through misinformation, make risk and science feel tangible, give people better ways to evaluate evidence for themselves, and become a voice for science.
Welcome to this movement. I am so glad you are here. Together, we are not just supporting science; we are protecting our future.
-Nini
P.S. I don’t want to overwhelm your inboxes, but I do want to make sure my content arrives at a time when you can view it. Please let me know which day works best for delivery!
I'm Nini. My expertise spans from sensor design to neural interfaces, with emphasis on nanofabrication, data science & statistics, process control, and risk analysis. I am also a wife and a mom to one little girl. TECHing it Apart emerged from my drive to share in-depth insights on topics I cover on Instagram (@niniandthebrain), where I dissect misinformation that skews public health policy and misleads consumers through poor methodology and data manipulation, as well as trends in health technology. Content here is free, but as an independent writer, I sure could use your support!