from Attronarch's Athenaeum

DriveThruRPG is running a New Year, New Game sale, with up to 75% off select titles. The sale runs until January 16th.

Here are six old-school systems worth getting:

  1. Dungeons & Dragons Original Edition. The game that kicked it all off. Still very playable, still very inspirational.
  2. Dungeons & Dragons Rules Cyclopedia. The best edition of Classic D&D, combining the content of Basic, Expert, Companion, and Master into one tome.
  3. Arduin Trilogy. Contains the first three Arduin Grimoires, which were supplements for OD&D.
  4. Arduin II. Combines and restates the Arduin Grimoires into a self-contained game.
  5. The Palladium Fantasy Role-Playing Game Revised Edition. Reads like a heartbreaker based on Advanced Dungeons & Dragons. It has a ton of interesting concepts and ideas, knows nothing of “balance,” and features a lot of nice art.
  6. Basic Roleplaying: Universal Game Engine. An original d100 engine that powers RuneQuest, Call of Cthulhu, and many other games, all in one tome.

And here are six OSR systems worth checking out:

There are more than 4000 titles that are discounted—do let me know your favourites. And as always, spend responsibly!

#Sale #OSR

 
Read more...

from SmarterArticles

Every day, billions of people tap, swipe, and type their lives into digital platforms. Their messages reveal emerging slang before dictionaries catch on. Their search patterns signal health crises before hospitals fill up. Their collective behaviours trace economic shifts before economists can publish papers. This treasure trove of human insight sits tantalisingly close to platform operators, yet increasingly out of legal reach. The question haunting every major technology company in 2026 is deceptively simple: how do you extract meaning from user content without actually seeing it?

The answer lies in a fascinating collection of mathematical techniques collectively known as privacy-enhancing technologies, or PETs. These are not merely compliance tools designed to keep regulators happy. They represent a fundamental reimagining of what data analysis can look like in an age where privacy has become both a legal requirement and a competitive differentiator. The global privacy-enhancing technologies market, valued at approximately USD 3.17 billion in 2024, is projected to explode to USD 28.4 billion by 2034, growing at a compound annual growth rate of 24.5 percent. That growth trajectory tells a story about where the technology industry believes the future lies.

This article examines the major privacy-enhancing technologies available for conducting trend analysis on user content, explores the operational and policy changes required to integrate them into analytics pipelines, and addresses the critical question of how to validate privacy guarantees in production environments.

The Privacy Paradox at Scale

Modern platforms face an uncomfortable tension that grows more acute with each passing year. On one side sits the undeniable value of understanding user behaviour at scale. Knowing which topics trend, which concerns emerge, and which patterns repeat allows platforms to improve services, detect abuse, and generate the insights that advertisers desperately want. On the other side sits an increasingly formidable wall of privacy regulations, user expectations, and genuine ethical concerns about surveillance capitalism.

The regulatory landscape has fundamentally shifted in ways that would have seemed unthinkable a decade ago. The General Data Protection Regulation (GDPR) in the European Union can impose fines of up to four percent of global annual revenue or twenty million euros, whichever is higher. Since 2018, GDPR enforcement has resulted in 2,248 fines totalling almost 6.6 billion euros, with the largest single fine being Meta's 1.2 billion euro penalty in May 2023 for transferring European user data to the United States without adequate legal basis. The California Consumer Privacy Act and its successor, the California Privacy Rights Act, apply to for-profit businesses with annual gross revenue exceeding USD 26.625 million, or those handling personal information of 100,000 or more consumers. By 2025, over twenty US states have enacted comprehensive privacy laws with requirements similar to GDPR and CCPA.

The consequences of non-compliance extend far beyond financial penalties. Companies face reputational damage that can erode customer trust for years. The 2024 IBM Cost of a Data Breach Report reveals that the global average data breach cost has reached USD 4.88 million, representing a ten percent increase from the previous year. This figure encompasses not just regulatory fines but also customer churn, remediation costs, and lost business opportunities. Healthcare organisations face even steeper costs, with breaches in that sector averaging USD 10.93 million, the highest of any industry for the fourteenth consecutive year.

Traditional approaches to this problem treated privacy as an afterthought. Organisations would collect everything, store everything, analyse everything, and then attempt to bolt on privacy protections through access controls and anonymisation. This approach has proven inadequate. Researchers have repeatedly demonstrated that supposedly anonymised datasets can be re-identified by combining them with external information. Latanya Sweeney's landmark research showed that 87 percent of Americans could be uniquely identified using just their date of birth, gender, and ZIP code. The traditional model of collect first, protect later is failing, and the industry knows it.

Differential Privacy Comes of Age

In 2006, Cynthia Dwork, working alongside Frank McSherry, Kobbi Nissim, and Adam Smith, published a paper that would fundamentally reshape how we think about data privacy. Their work, titled “Calibrating Noise to Sensitivity in Private Data Analysis,” introduced the mathematical framework of differential privacy. Rather than trying to hide individual records through anonymisation, differential privacy works by adding carefully calibrated statistical noise to query results. The noise is calibrated so that the presence or absence of any one individual's data has only a provably bounded effect on the output, making it effectively impossible to tell whether that person's data was included in the dataset, while still allowing accurate aggregate statistics to emerge from sufficiently large datasets.

The beauty of differential privacy lies in its mathematical rigour. The framework introduces two key parameters: epsilon and delta. Epsilon represents the “privacy budget” and quantifies the maximum amount of information that can be learned about any individual from the output of a privacy-preserving algorithm. A smaller epsilon provides stronger privacy guarantees but typically results in less accurate outputs. Delta represents the probability that the privacy guarantee might fail. Together, these parameters allow organisations to make precise, quantifiable claims about the privacy protections they offer.

In practice, epsilon values often range from 0.1 to 1 for strong privacy guarantees, though specific applications may use higher values when utility requirements demand it. The cumulative nature of privacy budgets means that each query against a dataset consumes some of the available privacy budget. Eventually, repeated queries exhaust the budget, requiring either a new dataset or acceptance of diminished privacy guarantees. This constraint forces organisations to think carefully about which analyses truly matter.
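
To make these parameters concrete, here is a minimal Python sketch of the Laplace mechanism with a toy budget tracker. It is illustrative only: the PrivacyAccountant class, the epsilon values, and the per-user tallies are assumptions made for the example, not any production library's design.

```python
# Minimal sketch: Laplace mechanism for a count query, plus a toy epsilon tracker.
import numpy as np

class PrivacyAccountant:
    """Tracks cumulative epsilon spent against a fixed total budget."""
    def __init__(self, total_epsilon: float):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon

def private_count(values, predicate, epsilon, sensitivity=1.0, accountant=None):
    """Differentially private count of items matching `predicate`.

    Adding or removing one user changes the true count by at most `sensitivity`,
    so Laplace noise with scale sensitivity/epsilon yields epsilon-DP.
    """
    if accountant is not None:
        accountant.charge(epsilon)
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical per-user tallies: how many users mentioned a trending topic today?
accountant = PrivacyAccountant(total_epsilon=1.0)
posts_per_user = [3, 0, 1, 7, 2, 0, 5]
noisy = private_count(posts_per_user, lambda n: n > 0, epsilon=0.25,
                      accountant=accountant)
print(f"noisy count: {noisy:.1f}, budget spent: {accountant.spent}")
```

Under this simple composition model, four queries at epsilon 0.25 exhaust a total budget of 1.0, which is exactly the discipline described above.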

Major technology companies have embraced differential privacy with varying degrees of enthusiasm and transparency. Apple has been a pioneer in implementing local differential privacy across iOS and macOS. The company uses the technique for QuickType suggestions (with an epsilon of 16) and emoji suggestions (with an epsilon of 4). Apple also uses differential privacy to learn iconic scenes and improve key photo selection for the Memories and Places iOS apps.

Google's differential privacy implementations span Chrome, YouTube, and Maps, analysing user activity to improve experiences without linking noisy data with identifying information. The company has made its differential privacy library open source and partnered with Tumult Labs to bring differential privacy to BigQuery. This technology powers the Ads Data Hub and enabled the COVID-19 Community Mobility Reports that provided valuable pandemic insights while protecting individual privacy. Google's early implementations date back to 2014 with RAPPOR for collecting statistics about unwanted software.

Microsoft applies differential privacy in its Assistive AI with an epsilon of 4. This epsilon value has become a policy standard across Microsoft use cases for differentially private machine learning, applying to each user's data over a period of six months. Microsoft also uses differential privacy for collecting telemetry data from Windows devices.

The most ambitious application of differential privacy came from the United States Census Bureau for the 2020 Census. This marked the first time any federal government statistical agency applied differential privacy at such a scale. The Census Bureau established accuracy targets ensuring that the largest racial or ethnic group in any geographic entity with a population of 500 or more persons would be accurate within five percentage points of their enumerated value at least 95 percent of the time. Unlike previous disclosure avoidance methods such as data swapping, the differential privacy approach allows the Census Bureau to be fully transparent about its methodology, with programming code and settings publicly available.

Federated Learning and the Data That Never Leaves

If differential privacy protects data by adding noise, federated learning protects data by ensuring it never travels in the first place. This architectural approach to privacy trains machine learning models directly on user devices at the network's edge, eliminating the need to upload raw data to the cloud entirely. Users train local models on their own data and contribute only the resulting model updates, called gradients, to a central server. These updates are aggregated to create a global model that benefits from everyone's data without anyone's data ever leaving their device.
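
A toy federated-averaging round can be sketched in a few lines of NumPy. The devices, data, and single-step local update below are assumptions made for illustration, not any vendor's implementation; real systems use frameworks such as TensorFlow Federated and far richer models.

```python
# Sketch of federated averaging: raw data stays on each "device",
# only model updates are sent to the server and averaged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])

# Three devices, each holding private data that never leaves the device.
devices = []
for _ in range(3):
    x = rng.normal(size=(20, 3))
    y = x @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((x, y))

def local_update(weights, x, y, lr=0.1):
    """One local gradient step of linear regression on a device's own data."""
    grad = 2 * x.T @ (x @ weights - y) / len(y)
    return weights - lr * grad

global_w = np.zeros(3)
for _ in range(50):
    # Each device trains locally; only the updated weights are uploaded.
    updates = [local_update(global_w, x, y) for x, y in devices]
    # The server averages the updates without ever seeing the raw (x, y) data.
    global_w = np.mean(updates, axis=0)

print("learned weights:", np.round(global_w, 2))  # approaches true_w
```

In practice the updates themselves can leak information, which is why deployments typically layer secure aggregation or differential privacy on top, as discussed later in this article.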

The concept aligns naturally with data minimisation principles enshrined in regulations like GDPR. By design, federated learning structurally embodies the practice of collecting only what is necessary. Major technology companies including Google, Apple, and Meta have adopted federated learning in applications ranging from keyboard prediction (Gboard) to voice assistants (Siri) to AI assistants on social platforms.

Beyond machine learning, the same principles apply to analytics through what Google calls Federated Analytics. This approach supports basic data science needs such as counts, averages, histograms, quantiles, and other SQL-like queries, all computed locally on devices and aggregated without centralised data collection. Analysts can learn aggregate model metrics, popular trends and activities, or geospatial location heatmaps without ever seeing individual user data.
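
The analytics variant can be sketched even more simply: each device computes and clips a local histogram, and the server only ever sums those counts. The topics and posts below are invented for illustration.

```python
# Sketch of federated analytics: per-device topic histograms, aggregated server-side.
from collections import Counter

def local_histogram(posts, topics, clip=10):
    """Count topic mentions on-device, clipping each user's total contribution."""
    counts = Counter()
    for post in posts:
        for topic in topics:
            if topic in post.lower():
                counts[topic] += 1
    return {t: min(counts[t], clip) for t in topics}

TOPICS = ["election", "flu", "playoffs"]
device_posts = [
    ["Anyone else down with the flu?", "flu season again"],
    ["playoffs tonight!", "election debate recap"],
    ["watching the playoffs", "flu shot booked"],
]

# The server sees only per-device histograms, never the posts themselves.
aggregate = Counter()
for posts in device_posts:
    aggregate.update(local_histogram(posts, TOPICS))

print(dict(aggregate))  # {'election': 1, 'flu': 3, 'playoffs': 2}
```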

The technical foundations have matured considerably. TensorFlow Federated is Google's open source framework designed specifically for federated learning research and applications. PyTorch has also become increasingly popular for federated learning through extensions and specialised libraries. These tools make the technology accessible to organisations beyond the largest technology companies.

An interesting collaboration emerged from the pandemic response. Apple and Google's Exposure Notification framework includes an analytics component that uses distributed differential privacy with a local epsilon of 8. This demonstrates how federated approaches can be combined with differential privacy for enhanced protection.

However, federated learning presents its own challenges. The requirements of privacy and security in federated learning are inherently conflicting. Privacy necessitates the concealment of individual client updates, while security requires some disclosure of client updates to detect anomalies like adversarial attacks. Research gaps remain in handling non-identical data distributions across devices and defending against attacks.

Homomorphic Encryption and Computing on Secrets

Homomorphic encryption represents what cryptographers sometimes call the “holy grail” of encryption: the ability to perform computations on encrypted data without ever decrypting it. The results of these encrypted computations, when decrypted, match what would have been obtained by performing the same operations on the plaintext data. This means sensitive data can be processed, analysed, and transformed while remaining encrypted throughout the entire computation pipeline.
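
The additive flavour of this idea is easiest to see with the classic Paillier scheme, sketched below in pure Python with toy parameters that are far too small for real security. It is only an illustration of the principle that multiplying ciphertexts adds the underlying plaintexts; production systems use vetted libraries and modern fully homomorphic schemes. (The sketch assumes Python 3.9+ for math.lcm and modular inverses via pow.)

```python
# Toy Paillier cryptosystem: additively homomorphic encryption in pure Python.
import math
import secrets

# Two well-known Mersenne primes -- illustrative only, nowhere near real key sizes.
P = 2**61 - 1
Q = 2**89 - 1
N, N2, G = P * Q, (P * Q) ** 2, P * Q + 1
LAM = math.lcm(P - 1, Q - 1)      # private exponent
MU = pow(LAM, -1, N)              # valid because G = N + 1

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < N) under the public key (N, G)."""
    r = secrets.randbelow(N - 1) + 1          # random blinding factor
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Decrypt with the private key (LAM, MU)."""
    l = (pow(c, LAM, N2) - 1) // N            # the L(x) = (x - 1) / N function
    return (l * MU) % N

def add_encrypted(c1: int, c2: int) -> int:
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return (c1 * c2) % N2

# Aggregate salaries without the aggregator ever seeing an individual value.
salaries = [52_000, 61_500, 48_250]
total_ct = encrypt(0)
for s in salaries:
    total_ct = add_encrypted(total_ct, encrypt(s))

assert decrypt(total_ct) == sum(salaries)
print("encrypted sum decrypts to:", decrypt(total_ct))
```

Even this toy makes the performance concern tangible: every value becomes a multi-hundred-bit ciphertext and every addition a big-integer multiplication, which is one reason the overheads discussed below matter.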

As of 2024, homomorphic encryption has moved beyond theoretical speculation into practical application. The underlying cryptography is no longer of purely academic interest; it has become increasingly practical to deploy. The technology particularly shines in scenarios requiring secure collaboration across organisational boundaries where trust is limited.

In healthcare, comprehensive frameworks now enable researchers to conduct collaborative statistical analysis on health records while preserving privacy and ensuring security. These frameworks integrate privacy-preserving techniques including secret sharing, secure multiparty computation, and homomorphic encryption. The ability to analyse encrypted medical data has applications in drug development, where multiple parties need to use datasets without compromising patient confidentiality.

Financial institutions leverage homomorphic encryption for fraud detection across institutions without exposing customer data. Banks can collaborate on anti-money laundering efforts without revealing their customer relationships.

The VERITAS library, presented at the 2024 ACM Conference on Computer and Communications Security, became the first library supporting verification of any homomorphic operation, demonstrating practicality for various applications with less than three times computation overhead compared to the baseline.

Despite these advances, significant limitations remain. Encryption introduces substantial computational overhead due to the complexity of performing operations on encrypted data. Slow processing speeds make fully homomorphic encryption impractical for real-time applications, and specialised knowledge is required to effectively deploy these solutions.

Secure Multi-Party Computation and Collaborative Secrets

Secure multi-party computation, or MPC, takes a different approach to the same fundamental problem. Rather than computing on encrypted data, MPC enables multiple parties to jointly compute a function over their inputs while keeping those inputs completely private from each other. Each party contributes their data but never sees anyone else's contribution, yet together they can perform meaningful analysis that would be impossible if each party worked in isolation.

The technology has found compelling real-world applications that demonstrate its practical value. The Boston Women's Workforce Council has used secure MPC to measure gender and racial wage gaps in the greater Boston area. Participating organisations contribute their payroll data through the MPC protocol, allowing analysis of aggregated data for wage gaps by gender, race, job category, tenure, and ethnicity without revealing anyone's actual wage.
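
The building block behind that kind of computation can be illustrated with additive secret sharing, the simplest MPC primitive. In the hypothetical sketch below, each organisation splits its private payroll total into random shares so that no single compute server ever holds a complete value, and only the aggregate is reconstructed.

```python
# Sketch of additive secret sharing over a prime field.
import secrets

PRIME = 2**61 - 1  # field modulus, comfortably larger than the values below

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n_parties random shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each organisation's private payroll total (hypothetical figures).
org_totals = [4_200_000, 3_750_000, 5_100_000]
n_servers = 3

# Distribute one share of every input to each compute server.
per_server = [[] for _ in range(n_servers)]
for total in org_totals:
    for server, s in zip(per_server, share(total, n_servers)):
        server.append(s)

# Each server sums only the shares it holds and learns nothing about any input.
partial_sums = [sum(srv) % PRIME for srv in per_server]

# Only the recombined aggregate is ever revealed.
combined = sum(partial_sums) % PRIME
assert combined == sum(org_totals)
print("aggregate payroll:", combined)
```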

The global secure multiparty computation market was estimated at USD 794.1 million in 2023 and is projected to grow at a compound annual growth rate of 11.8 percent from 2024 to 2030. In June 2024, Pyte, a secure computation platform, announced additional funding bringing its total capital to over USD 12 million, with patented MPC technology enabling enterprises to securely collaborate on sensitive data.

Recent research has demonstrated the feasibility of increasingly complex MPC applications. The academic conference TPMPC 2024, hosted by TU Darmstadt's ENCRYPTO group, showcased research proving that complex tasks like secure inference with Large Language Models are now feasible with today's hardware. A paper titled “Sigma: Secure GPT Inference with Function Secret Sharing” showed that running inference operations on an encrypted 13 billion parameter model achieves inference times of a few seconds per token.

Partisia has partnered with entities in Denmark, Colombia, and the United States to apply MPC in healthcare analytics and cross-border data exchange. QueryShield, presented at the 2024 International Conference on Management of Data, supports relational analytics with provable privacy guarantees using MPC.

Synthetic Data and the Privacy of the Artificial

While the previous technologies focus on protecting real data during analysis, synthetic data generation takes a fundamentally different approach. Rather than protecting real data through encryption or noise, it creates entirely artificial datasets that maintain the statistical properties and patterns of original data without containing any actual sensitive information. By 2024, synthetic data has established itself as an essential component in AI and analytics, with estimates indicating 60 percent of projects now incorporate synthetic elements. The market has expanded from USD 0.29 billion in 2023 toward projected figures of USD 3.79 billion by 2032, representing a 33 percent compound annual growth rate.

Modern synthetic data creation relies on sophisticated approaches including Generative Adversarial Networks and Variational Autoencoders. These neural network architectures learn the underlying distribution of real data and generate new samples that follow the same patterns without copying any actual records. The US Department of Homeland Security Science and Technology Directorate awarded contracts in October 2024 to four startups to develop privacy-enhancing synthetic data generation capabilities.
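
The core idea can be demonstrated with something far cruder than a GAN: estimate the joint statistics of a numeric table, then sample entirely new rows from the fitted distribution. The columns and figures below are invented, and a multivariate normal is a deliberately simple stand-in for the generative models named above.

```python
# Sketch of synthetic tabular data: fit aggregate statistics, sample new rows.
import numpy as np

rng = np.random.default_rng(42)

# Pretend this is real, sensitive data: columns [age, income, sessions_per_week].
real = np.column_stack([
    rng.normal(38, 10, 1000),
    rng.lognormal(10.5, 0.4, 1000),
    rng.poisson(6, 1000).astype(float),
])

# "Train": keep only aggregate statistics, never individual rows.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# "Generate": draw brand-new synthetic rows from the fitted distribution.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:     ", np.round(mean, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```

The tension noted below is already visible here: the more faithfully the synthetic sample tracks the real distribution, including its outliers, the more it can reveal about the original records.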

Several platforms have emerged as leaders in this space. MOSTLY AI, based in Vienna, uses its generative AI platform to create highly accurate and private tabular synthetic data. Rockfish Data, based on foundational research at Carnegie Mellon University, developed a high-fidelity privacy-preserving platform. Hazy specialises in privacy-preserving synthetic data for regulated industries and is now part of SAS Data Maker.

Research published in Scientific Reports demonstrated that synthetic data can maintain similar utility (predictive performance) as real data while preserving privacy, supporting compliance with GDPR and HIPAA.

However, any method to generate synthetic data faces an inherent tension. The goals of imitating the statistical distributions in real data and ensuring privacy are sometimes in conflict, leading to a trade-off between usefulness and privacy.

Trusted Execution Environments and Hardware Sanctuaries

Moving from purely mathematical solutions to hardware-based protection, trusted execution environments, or TEEs, take yet another approach to privacy-preserving computation. Rather than mathematical techniques, TEEs rely on hardware features that create secure, isolated areas within a processor where code and data are protected from the rest of the system, including privileged software like the operating system or hypervisor.

A TEE acts as a black box for computation. Input and output can be known, but the state inside the TEE is never revealed. Data is only decrypted while being processed within the CPU package and automatically encrypted once it leaves the processor, making it inaccessible even to the system administrator.

Two main approaches have emerged in the industry. Intel's Software Guard Extensions (SGX) pioneered process-based TEE protection, dividing applications into trusted and untrusted components with the trusted portion residing in encrypted memory. AMD's Secure Encrypted Virtualisation (SEV) later brought a paradigm shift with VM-based TEE protection, enabling “lift-and-shift” deployment of legacy applications. Intel has more recently implemented this paradigm in Trust Domain Extensions (TDX).

A 2024 research paper available on ScienceDirect provides a comparative evaluation of TDX, SEV, and SGX implementations. The power of TEEs lies in their ability to perform computations on unencrypted data (significantly faster than homomorphic encryption) while providing robust security guarantees.

Major cloud providers have embraced TEE technology. Azure Confidential VMs run virtual machines with AMD SEV where even Microsoft cannot access customer data. Google Confidential GKE offers Kubernetes clusters with encrypted node memory.

Zero-Knowledge Proofs and Proving Without Revealing

Zero-knowledge proofs represent a revolutionary advance in computational integrity and privacy technology. They enable the secure and private exchange of information without revealing underlying private data. A prover can convince a verifier that a statement is true without disclosing any information beyond the validity of the statement itself.

In the context of data analytics, zero-knowledge proofs allow organisations to prove properties about their data without exposing the data. Companies like Inpher leverage zero-knowledge proofs to enhance the privacy and security of machine learning solutions, ensuring sensitive data used in training remains confidential while still allowing verification of model properties.

Zero-Knowledge Machine Learning (ZKML) integrates machine learning with zero-knowledge proofs. The paper “zkLLM: Zero Knowledge Proofs for Large Language Models” addresses a challenge within AI legislation: establishing authenticity of outputs generated by Large Language Models without compromising the underlying training data. This intersection of cryptographic proofs and neural networks represents one of the most promising frontiers in privacy-preserving AI.

The practical applications extend beyond theoretical interest. Financial institutions can prove solvency without revealing individual account balances. Healthcare researchers can demonstrate that their models were trained on properly consented data without exposing patient records. Regulatory auditors can verify compliance without accessing sensitive business information. Each use case shares the same underlying principle: proving a claim's truth without revealing the evidence supporting it.
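
A flavour of the underlying mechanics can be given with a classic Schnorr-style proof of knowledge, sketched below with the Fiat-Shamir heuristic: the prover shows it knows the secret exponent x behind a public value y = g^x mod p without revealing x. The group parameters are toy-sized assumptions; real systems for statements about data or models use standardised elliptic-curve groups and far more elaborate proof systems such as zk-SNARKs.

```python
# Toy non-interactive Schnorr proof of knowledge of a discrete logarithm.
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus -- illustrative only
G = 3            # fixed base

def fiat_shamir_challenge(*values) -> int:
    """Derive the verifier's challenge by hashing the proof transcript."""
    data = "|".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)

# Prover's secret and the public value everyone can see.
secret_x = secrets.randbelow(P - 2) + 1
public_y = pow(G, secret_x, P)

# Prove knowledge of secret_x without revealing it.
r = secrets.randbelow(P - 2) + 1
commitment = pow(G, r, P)
challenge = fiat_shamir_challenge(G, public_y, commitment)
response = (r + challenge * secret_x) % (P - 1)
proof = (commitment, response)

# Verification uses only (public_y, proof); secret_x never appears.
commitment, response = proof
challenge = fiat_shamir_challenge(G, public_y, commitment)
assert pow(G, response, P) == (commitment * pow(public_y, challenge, P)) % P
print("proof verified without revealing the secret")
```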

Key benefits include data privacy (computations on sensitive data without exposure), model protection (safeguarding intellectual property while allowing verification), trust and transparency (enabling auditable AI systems), and collaborative innovation across organisational boundaries. Challenges hindering widespread adoption include substantial computing power requirements for generating and verifying proofs, interoperability difficulties between different implementations, and the steep learning curve for development teams unfamiliar with cryptographic concepts.

Operational Integration of Privacy-Enhancing Technologies

Deploying privacy-enhancing technologies requires more than selecting the right mathematical technique. It demands fundamental changes to how organisations structure their analytics pipelines and governance processes. Gartner predicts that by 2025, 60 percent of large organisations will use at least one privacy-enhancing computation technique in analytics, business intelligence, or cloud computing. Reaching this milestone requires overcoming significant operational challenges.

PETs typically must integrate with additional security and data tools, including identity and access management solutions, data preparation tooling, and key management technologies. These integrations introduce overheads that should be assessed early in the decision-making process. Organisations should evaluate the adaptability of their chosen PETs, as scope creep and requirement changes are common in dynamic environments. Late changes in homomorphic encryption and secure multi-party computation implementations can negatively impact time and cost.

Performance considerations vary significantly across technologies. Homomorphic encryption is typically considerably slower than plaintext operations, making it unsuitable for latency-sensitive applications. Differential privacy may degrade accuracy for small sample sizes. Federated learning introduces communication overhead between devices and servers. Organisations must match technology choices to their specific use cases and performance requirements.

Implementing PETs requires in-depth technical expertise. Specialised skills such as cryptography expertise can be hard to find, often making in-house development of PET solutions challenging. The complexity extends to procurement processes, necessitating collaboration between data governance, legal, and IT teams.

Policy changes accompany technical implementation. Organisations must establish clear governance frameworks that define who can access which analyses, how privacy budgets are allocated and tracked, and what audit trails must be maintained. Data retention policies need updating to reflect the new paradigm where raw data may never be centrally collected.

The Centre for Data Ethics and Innovation categorises PETs into traditional approaches (encryption in transit, encryption at rest, and de-identification techniques) and emerging approaches (homomorphic encryption, trusted execution environments, multiparty computation, differential privacy, and federated analytics). Effective privacy strategies often layer multiple techniques together.

Validating Privacy Guarantees in Production

Theoretical privacy guarantees must be validated in practice. Small bugs in privacy-preserving software can easily compromise desired protections. Production tools should carefully implement primitives, following best practices in secure software design such as modular design, systematic code reviews, comprehensive test coverage, regular audits, and effective vulnerability management.

Privacy auditing has emerged as an important research area supporting the design and validation of privacy-preserving mechanisms. Empirical auditing techniques establish practical lower bounds on privacy leakage, complementing the theoretical upper bounds provided by differential privacy.

Canary-based auditing tests privacy guarantees by introducing specially designed examples, known as canaries, into datasets. Auditors then test whether these canaries can be detected in model outputs. Research on privacy attacks for auditing spans five main categories: membership inference attacks, data-poisoning attacks, model inversion attacks, model extraction attacks, and property inference.
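
The logic of such audits can be condensed into a single inequality: an (epsilon, delta)-differentially private mechanism forces any membership test to satisfy TPR <= exp(epsilon) * FPR + delta, so observed attack success rates imply an empirical lower bound on epsilon. The sketch below applies that bound to hypothetical canary-detection rates; a rigorous audit would also put confidence intervals around the estimated rates, typically over many training runs.

```python
# Turning canary/membership-inference attack rates into an empirical epsilon lower bound.
import math

def empirical_epsilon_lower_bound(tpr: float, fpr: float, delta: float = 1e-5) -> float:
    """Largest epsilon lower bound implied by an attack's true/false positive rates."""
    bounds = []
    if fpr > 0 and tpr - delta > 0:
        bounds.append(math.log((tpr - delta) / fpr))
    if (1 - tpr) > 0 and (1 - fpr) - delta > 0:
        # The same DP inequality applied to the complementary test.
        bounds.append(math.log(((1 - fpr) - delta) / (1 - tpr)))
    return max(bounds, default=0.0)

# Hypothetical audit outcome: canaries flagged 60% of the time, non-members 5%.
print(f"epsilon >= {empirical_epsilon_lower_bound(tpr=0.60, fpr=0.05):.2f}")
```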

A paper appearing at NeurIPS 2024 on nearly tight black-box auditing of differentially private machine learning demonstrates that rigorous auditing can detect bugs and identify privacy violations in real-world implementations. However, the main limitation is computational cost. Black-box auditing typically requires training hundreds of models to empirically estimate error rates with good accuracy and confidence.

Continuous monitoring addresses scenarios where data processing mechanisms require regular privacy validation. The National Institute of Standards and Technology (NIST) has developed draft guidance on evaluating differential privacy protections, fulfilling a task under the Executive Order on AI. The NIST framework introduces a differential privacy pyramid where the ability for each component to protect privacy depends on the components below it.

DP-SGD (Differentially Private Stochastic Gradient Descent) is increasingly deployed in production systems and supported in open source libraries such as Opacus for PyTorch, TensorFlow Privacy, and JAX-based tooling. These libraries implement auditing and monitoring capabilities that help organisations validate their privacy guarantees in practice.

Selecting the Right Technology for Specific Use Cases

With multiple privacy-enhancing technologies available, organisations face the challenge of selecting the right approach for their specific needs. The choice depends on several factors: the nature of the data, the types of analysis required, the computational resources available, the expertise of the team, and the regulatory environment.

Differential privacy excels when organisations need aggregate statistics from large datasets and can tolerate some accuracy loss. It provides mathematically provable guarantees and has mature implementations from major technology companies. However, it struggles with small sample sizes where noise can overwhelm the signal.

Federated learning suits scenarios where data naturally resides on distributed devices and where organisations want to train models without centralising data. It works well for mobile applications, IoT deployments, and collaborative learning across institutions.

Homomorphic encryption offers the strongest theoretical guarantees by keeping data encrypted throughout computation, making it attractive for highly sensitive data. The significant computational overhead limits its applicability to scenarios where privacy requirements outweigh performance needs.

Secure multi-party computation enables collaboration between parties who do not trust each other, making it ideal for competitive analysis, industry-wide fraud detection, and cross-border data processing.

Synthetic data provides the most flexibility after generation, as synthetic datasets can be shared and analysed using standard tools without ongoing privacy overhead.

Trusted execution environments offer performance advantages over purely cryptographic approaches while still providing hardware-backed isolation.

Many practical deployments combine multiple technologies. Federated learning often incorporates differential privacy for additional protection of aggregated updates. The most robust privacy strategies layer complementary protections rather than relying on any single technology.

Looking Beyond the Technological Horizon

The market for privacy-enhancing technologies is expected to mature with improved standardisation and integration, creating new opportunities in privacy-preserving data analytics and AI. The outlook is positive, with PETs becoming foundational to secure digital transformation globally.

However, PETs are neither a silver bullet nor a standalone solution. Their use comes with significant risks and limitations ranging from potential data leakage to high computational costs. They cannot substitute for existing laws and regulations; rather, they complement them in helping implement privacy protection principles. Ethically implementing PETs is essential. These technologies must be designed and deployed to protect marginalised groups and avoid practices that may appear privacy-preserving but actually exploit sensitive data or undermine privacy.

The fundamental insight driving this entire field is that privacy and utility are not necessarily zero-sum. Through careful application of mathematics, cryptography, and system design, organisations can extract meaningful insights from user content while enforcing strict privacy guarantees. The technologies are maturing. The regulatory pressure is mounting. The market is growing. The question is no longer whether platforms will adopt privacy-enhancing technologies for their analytics, but which combination of techniques will provide the best balance of utility and risk mitigation for their specific use cases.

What is clear is that the era of collecting everything and figuring out privacy later has ended. The future belongs to those who can see everything while knowing nothing.

Tim Green, UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

 
Read more... Discuss...

from The Catechetic Converter

Masculinity itself, bearded and in black, resting with an arm draped on an old candlestick, an Egyptian sarcophagus leaning against the wall to the right

In my recent series on the commemorations for the week between Christmas Day and the Feast of the Holy Name, I managed to generate some small degree of controversy and discussion when I mentioned that the Massacre of the Innocents as recorded in Matthew’s gospel likely did not actually happen. As I said in that piece, there is no extra-biblical historical evidence that the event occurred. Even one of the most important historians we rely on, Josephus, did not include any mention of Herod killing scores of infants—and Josephus does not hold back on his criticisms of the Herodians.

Now, I am willing to admit one caveat, and that is there’s a theory that maybe the massacre happened, but it wasn’t a large-scale event involving thousands of children. Given the limitations of the information supplied by the Magi, how many kids could have been born a) in the narrow window after the star appeared and b) in Bethlehem itself? If that were the case, then Herod ordering the murder of a small number of kids would hardly register among all the horrible things he was known for and might not garner a mention from historians and scribes of the time.

However, I still hold to the contention that Matthew is not interested in recording precise, “accurate” (as we understand the term) history so much as writing a story that he feels is true with regard to Jesus. Perhaps the causes that led to the Holy Family taking flight to Egypt were more of a slow-burn situation, where young kids were likely to die and so, inspired by God, Joseph takes Jesus and Mary (in an event that also echoes another Joseph—one who managed to shelter the patriarchs in Egypt during a time of famine). Matthew wants to express the dire situation to the Church, and so telling a story about Herod murdering kids communicates that truth. It’s a story that feels true to who Herod is and is a kind of shorthand way of helping Christians born years, even decades, after the event understand what a monster the man was.

But this all serves as a way to address the elephant in the room when it comes to reading the Bible: it isn’t always historically accurate. This provides grounds for a kind of crisis of faith when we treat the whole “divine inspiration” thing in terms of what we call today “biblical inerrancy.” In other words, if the Bible is a book that God basically dictated to various writers and is, therefore, God’s actual words on paper, then what are we to make of things when the Bible and facts don’t line up? Is God lying to us? Does God get His facts mixed up? Or is there some demonic plot being enacted by historians and scholars to try and discredit the Bible? (This latter thing was basically the view of my church growing up; if the Bible and facts didn’t agree, then it was facts that needed to change—we can see such thinking happening in certain political circles today, but I digress).

In order to discuss this, we’ll need to break a few things down—namely, what we mean by “history” and what is meant by “divine inspiration.”

WHAT IS HISTORY?

History seems like a straightforward thing. It is the discipline of chronicling past events so that we can keep a record for posterity and revisit what has come before, right? Yes. But our modern conception of history differs a bit from what our ancestors had in mind when they conceived of history. See, our current understanding of history is shaped by the scientific method, which came about in the 1700s. Prior to this, the phenomena of our world were seen in terms of analogy. Take, for instance, reproduction. We continue to use vestigial language from our agrarian past to speak of how organisms reproduce, language like “seed” and “fertility.” The word sperm comes from the Greek sperma which means “seed.” So, for much of human history, we saw all forms of reproduction as analogous to agriculture: a seed is planted in a fertile space where new life emerges. It wasn’t until the invention of the microscope and the advent of the scientific method that we began to challenge this analogy and ask whether something else was going on. What emerged during this time was the concept of facts.

Prior to the mid-16th century, the Latin term factum referred simply to “a thing done or performed.” This usage is still common in the legal realm. Those of us who grew up with Dragnet recall Joe Friday regularly saying to witnesses, “just the facts.” In other words, recall the events without commentary or elucidation. Deborah went to the store at 5:15 in the evening. But, with the emergence of modern science, facts began to take on greater precedence. Facts were considered pure and superior, a distillation of the essence of a thing. Facts represent something that is observable and repeatable. Deborah can go to the store at 5:15 and so can I. What I can’t do is inhabit Deborah’s frame of mind. I can’t know what she was thinking as she walked to the store, how happy or unhappy she might have been. The fleeting thoughts and emotions she felt during that stroll. These are all unique to her, making them not reproducible and, therefore, useless in terms of data. They are extraneous, important to Deborah perhaps, but not important for finding out if Deborah saw James fleeing the scene of Jesse’s murder, which happened across from the store at around 5:25.

Thomas Jefferson famously applied such thinking to the Gospels. Since miracles and other supernatural events are not reproducible, repeatable, and measurable phenomena, Jefferson stripped the gospels of any reference to them. Jefferson believed this made for a more “true” Gospel because it was a gospel of facts. The bias of the scientific method is that facts are truth. If something is not factual then it isn’t true. And something is only factual if it is an observable, repeatable event free from extraneous conditions. Deborah can go to the store at 5:15 regardless of whether she’s happy or sad or praying or thinking about the baseball game. Those things are ancillary to facts. What is personal to her is not, objectively, true according to modern science.

So when we record history, we now aim to be as factual as possible. I used to be a journalist and journalism is a key resource for historians. The discipline of journalism is to write things as dispassionately as possible, removing your own feelings and commentary and presenting things as “factually” as one can, leaving the reader to decide how to think and feel about those things.

Now, I’m not here to argue against facts. Facts are important. I’m simply attempting, one, to demonstrate that the prioritizing of facts is a relatively recent development in human history and, two, to suggest that facts leave a lot of things out of a story.

When I was a journalist, I was also a creative writer, working on a novel and getting short fiction and poems published in TINY journals and publications (I did manage to get one piece of fairly unhinged “fan mail” for a five-line poem that was picked up by a publication that was simply photocopied sheets of paper to be stuck onto bulletin boards and whatnot). Creative writing gives texture to facts. That’s where we dwell on Deborah’s frustration that the short-stop dropped the ball in the bottom of the ninth, causing the other team to get two runners to home plate, costing her team the game—and that this frustration mirrors the frustration she feels that her husband is always working too late to go to the store and grab a gallon of milk for the house, leaving her to have to do it and making her feel like the center-fielder who had to make up for the short-stop’s mistake. Indeed, the creative writer will say that the real story is found in spots like these and not the facts. Facts make for poor story-telling.

The ancients knew this. When they wrote histories, they weren’t simply recording dispassionate facts. They were telling stories, stories full of texture and meaning. Their goal was to get readers to feel the story being told. In order to do this, elements might be told out of order, or hyperbole was employed, or even, at times, what we call “fiction” was used. The facts of the story might not be straight, but the Truth absolutely was.

Here’s an example from the gospels: Jesus’ cleansing of the temple. In the Synoptics (Matthew, Mark, and Luke) it serves as a kind of crescendo to Jesus’ story. The Synoptics all depict Jesus moving from Galilee and making His way to Jerusalem to where He enters in triumph, chases out the money-changers from the temple, which makes Him a more serious target of the religious authorities. But in John, Jesus cleanses the temple right at the beginning of His ministry, right after coming out of the desert and His 40-day-long bout with Satan. Further, John depicts Jesus going to and from Jerusalem on a regular basis. If all four gospels are true, how do we reconcile their conflicting facts? Do we say, as some have, that Jesus must have cleansed the temple twice? If that’s the case, why don’t all four gospels testify to that?

Perhaps we’re thinking of this incorrectly. We need to get back to that ancient way of thinking and consider that Truth is something that cannot be reduced down to simple facts. As Ian Markham, the dean and president of Virginia Theological Seminary, is known to say, we Christians do not read a book, we read a life; the book is important because the book testifies to the life. Given this, no true story of a life can be told only in fact. Truth moves beyond fact. And so, as a result, it doesn’t really matter when Jesus cleansed the temple. What matters is that Jesus is someone who cleanses the temple, whether as the culmination of His earthly ministry or resulting from being in the power of the Spirit after overcoming the devil in the wilderness. The facts of the story are in service to the Truth.

WHAT IS DIVINE INSPIRATION?

There are, of course, many many misunderstood passages in the Bible. Many of them are found in the writings of Saint Paul. This shouldn’t surprise us because even the Bible itself tells us that Paul is hard to understand, with Saint Peter writing:

Consider the patience of our Lord to be salvation, just as our dear friend and brother Paul wrote to you according to the wisdom given to him, speaking of these things in all his letters. Some of his remarks are hard to understand, and people who are ignorant and whose faith is weak twist them to their own destruction, just as they do the other scriptures. (2 Peter 3:15-16 CEB)

This is an important passage for a couple of reasons. First, it shows that the Church received Paul’s writings as scripture fairly early on. Second, it gives us a fun little insight into the lives of the early saints: even one of Paul’s friends—the one considered to be the first pope—has a hard time understanding what the heck he’s saying.

But one of the most broadly (and, I’d argue, dangerously) misunderstood things Paul wrote comes from a letter he wrote to his young protege named Timothy:

All scripture is given by inspiration of God, and is profitable for doctrine, for reproof, for correction, for instruction in righteousness: (2 Timothy 3:16 KJV)

And let’s also use the NIV version, since that’s arguably the one most people would know these days:

All Scripture is God-breathed and is useful for teaching, rebuking, correcting and training in righteousness, (2 Timothy 3:16 NIV)

So, here Saint Paul teaches that “all scripture” is “inspired,” which is translated as “God-breathed” in newer English versions. This leads us to the conclusion that “scripture” is something breathed from God, thus God’s very words, transcribed by holy writers. Or is it?

Before we begin to look at what it means that something is “God-breathed,” we need to take a look at the word “scripture.” We use the word exclusively for religious writings, but in its original sense “scripture” simply means “a thing written.” So, “writings.”

To put the passage literally, it would read “All writings are God-breathed.” Is this what Saint Paul is saying? That all writing is breathed out by God? Not only the Bible, but the Qur’an, the Upanishads, the Book of Mormon? Not only “religious” books but also The Catcher in the Rye, the Godzilla collectibles guide on my shelf, and the instruction manual to my TV? I don’t think this is what Saint Paul is teaching Saint Timothy.

The word “scripture” (graphe in Greek) is used exclusively in the Bible to refer to the writings of the Bible. We saw this a bit earlier with Saint Peter using the term to refer to Saint Paul’s letters. Elsewhere, it is used in reference to the books of the Old Testament. So the term seems to be applied to certain writings in this context.

Now, a lot of Christians will say that in this case “the scriptures” is simply short-hand for “the Bible.” But things are not that simple. For one, there was no such thing as “the Bible” when Paul was writing Timothy this letter. You might say “well, okay, sure; the New Testament wasn’t all written yet, but there was the Old Testament.”

It may surprise you to learn that what we think of as the Old Testament did not exist until around the 600s at the earliest. That’s 600 AD (or CE nowadays). As in, 600 years after the time of Jesus.

Now before you start writing me emails or replies on Mastodon, let me finish. I’m not saying that the writings themselves didn’t exist until then. I’m saying that the writings that make up the Old Testament as we know it were not put together into a definitive collection of 39 books (24 in rabbinic Judaism because a few of the books are consolidated and treated as a single book, notably the minor prophets) until that time. Yes there were translations of these books into Greek (called the Septuagint) and for many Christians those translations were treated as “the Bible” of their time, but given that some of the books are never referenced in the New Testament and they did not exist in a single volume, there is some question over what books were considered “official” back then. The group of rabbis known as the Masoretes were the ones who assembled the Old Testament as we know it in the 600-900s. Their list of books is what is used by Protestant Christians for the Old Testament.

This is all to say that the term “scripture” was something coming into form at the time of Saint Paul’s writing. And his use of the phrase “God-breathed” is likely a mechanism to help Saint Timothy know what writings are truly Christian and which ones to avoid. This is especially crucial given the preponderance of gnostic and anti-Gentile writings making the rounds at the time.

Perhaps seeing the passage in some wider context will help us. I am a fan of the Common English Bible, so I tend to use that:

But you must continue with the things you have learned and found convincing. You know who taught you. Since childhood you have known the holy scriptures that help you to be wise in a way that leads to salvation through faith that is in Christ Jesus. Every scripture is inspired by God and is useful for teaching, for showing mistakes, for correcting, and for training character, so that the person who belongs to God can be equipped to do everything that is good. (2 Timothy 3:14-17 CEB)

It might be bad scholarship on my part, but I tend to read the passage like this: “Every scripture that is inspired by God is useful for teaching,” etc. In other words, Saint Paul is reminding Saint Timothy that he is able to discern which writings are “God-breathed” and which ones aren’t. This isn’t so much a working definition on the doctrine of scripture as much as it is a piece of practical wisdom: if it doesn’t sound like scripture, then it isn’t. It’s one of the reasons that we can say that something like the Gospel of Thomas doesn’t bear the aroma of God’s breath—it ends with Jesus telling Saint Peter that Saint Mary of Magdala will need to be reincarnated as a man in order to enter heaven. And those “God-breathed” writings serve the purpose of instruction and formation to make for good Christians.

So the Bible itself does not define itself as being the result of God dictating His words into the ears of particular people. Rather, God breathes through the words that have been written, giving those of us who know Him through prayer and devotion the means to recognize Him in particular writings. Those writings are valuable because they evoke the very breath of God—like us!—and therefore have something to say about the sort of people God wants us to be.

TRUTH IN FICTION

This brings us back to the question of fiction and the Bible. Can the Bible contain fictional material and yet remain true? Yes.

Let’s ask this question a slightly different way: can God’s breath be detected through fiction? If we say no then we risk limiting God… Given that God is sovereign and gets what He wants because He is God, it is very much the case that God can use fiction as means for declaring His truth. Indeed, the book of Job is pretty much accepted as being an intentional work of fiction, but is held dearly as a source of beauty and truth—especially for those broken-hearted and desperate for God. Aside from that, we have entire books of poetry in the Bible and poetry is a medium that is not tied to mere fact, given to expansive and hyperbolic language in order to express Truth, God’s Truth.

So, the Bible isn’t always factual. That’s okay. That doesn’t mean it isn’t true. God can use fiction to express His Truth to us. Fact or fiction doesn’t matter as much as whether or not we can detect the presence of God’s breath in the story, whether or not the story is useful for teaching us and forming us into the sort of humans He has redeemed us to become.

***

The Rev. Charles Browning II is the rector of Saint Mary’s Episcopal Church in Honolulu, Hawai’i. He is a husband, father, surfer, and frequent over-thinker. Follow him on Mastodon and Pixelfed.

 
Read more... Discuss...
