What People Actually Build With a Raspberry Pi: Case Studies From the Field
The spec sheet for a Raspberry Pi reads like a modest embedded processor. What people actually build with one reads like an infrastructure engineer’s fever dream. Over the years, the platform has accumulated a body of real-world deployments that range from cost-effective home network appliances to production-grade industrial monitoring systems. This post examines a cross-section of those deployments — not hobbyist proof-of-concepts, but functioning systems solving real operational problems.
The case studies below span home labs, small businesses, agriculture, aviation, scientific research, and industrial environments. Each one is treated as an engineering problem: what was the requirement, what was the constraint, how was it solved, and what broke along the way.
Case Study 1: Replacing a $600 Router With a $50 Pi
The Problem
A small office with twelve workstations, two VLAN requirements (staff and guest), and a bandwidth-heavy workflow — daily video uploads to cloud storage — was running on a consumer-grade router that could not hold its ARP table under sustained load. The result was intermittent packet loss every few hours, each episode requiring a manual reboot. The vendor’s firmware had not been updated in three years.
The Build
A Raspberry Pi 4 (4GB) with two USB 3.0 Ethernet adapters — one for WAN, one for LAN — running Debian Bookworm with nftables for firewalling and traffic shaping, dnsmasq for DHCP and DNS, and hostapd on a cheap USB Wi-Fi adapter for the guest SSID.
The VLANs were implemented via 802.1Q tagging on the LAN interface using a managed switch, with the Pi’s nftables rules differentiating traffic by VLAN tag.
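The tagged subinterfaces the firewall refers to (eth1.10, eth1.20) have to exist on the Pi itself. A minimal systemd-networkd sketch, with assumed file paths (the same result is achievable with `ip link add ... type vlan`):

```ini
# /etc/systemd/network/eth1.10.netdev (staff VLAN)
[NetDev]
Name=eth1.10
Kind=vlan

[VLAN]
Id=10
```

with a matching `eth1.20.netdev` for the guest VLAN, and `VLAN=eth1.10` / `VLAN=eth1.20` lines added to the parent interface’s `.network` file so the subinterfaces are raised with it.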
# nftables.conf (simplified)
table ip filter {
    chain forward {
        type filter hook forward priority 0;

        iifname "eth1.10" oifname "eth0" accept      # staff to WAN
        iifname "eth1.20" oifname "eth0" accept      # guest to WAN
        iifname "eth1.20" oifname "eth1.10" drop     # guest cannot reach staff VLAN
        iifname "eth1.10" oifname "eth1.20" drop     # staff cannot reach guest VLAN
    }
}
Traffic shaping with tc and HTB queueing kept video uploads from saturating upstream bandwidth during business hours, reserving headroom for VoIP and interactive traffic.
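A shaping setup along these lines might look like the following sketch; the interface name, rates, and DSCP-based classifier are illustrative assumptions, not the office’s actual numbers:

```shell
# Root HTB qdisc on the WAN-facing interface; unclassified traffic goes to 1:30
tc qdisc add dev eth0 root handle 1: htb default 30

# Parent class capped at the upstream rate (assume 50 Mbit/s here)
tc class add dev eth0 parent 1: classid 1:1 htb rate 50mbit ceil 50mbit

# VoIP / interactive: guaranteed 10 Mbit/s, may borrow up to the full link
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 50mbit prio 1
# Bulk uploads: guaranteed 30 Mbit/s, ceiling held below the link rate
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 30mbit ceil 45mbit prio 2

# Classify VoIP by DSCP EF (ToS byte 0xb8); everything else falls into 1:30
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip tos 0xb8 0xff flowid 1:10
```

The `ceil` on the bulk class is what preserves headroom: uploads may borrow idle bandwidth but can never occupy the last few megabits reserved for interactive traffic.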
What Broke
The first USB Ethernet adapter failed at ninety days — a cheap unbranded unit. Replaced with a genuine ASIX AX88179-based adapter, which has stable Linux driver support. The Wi-Fi adapter overheated in an enclosed rack shelf and started dropping associations. Moved to open-air mounting with a heatsink on the Pi.
Outcome
The system ran for over two years without a reboot. CPU load average under sustained throughput sat around 0.3. The entire build cost $140 including switch. The commercial replacement quote was $600 for a Ubiquiti EdgeRouter with equivalent spec — itself a Linux router, but with managed firmware.
Key lesson: The Pi’s USB bus is a shared bottleneck. If you are routing at gigabit speeds, two USB adapters will saturate the bus before the CPU does. For gigabit routing, use the onboard Ethernet for WAN and a PCIe-attached NIC, via the Pi 5’s PCIe connector and an adapter HAT, for LAN. For sub-500Mbps environments, USB 3.0 adapters are sufficient.
Case Study 2: A Network-Wide Ad Blocking and DNS Logging Infrastructure
The Problem
A household with twenty-three connected devices — smart TVs, phones, tablets, laptops, and several IoT devices of unknown provenance — was generating DNS traffic that was partially leaking to manufacturer telemetry servers. The household operator wanted DNS-level ad blocking, per-device logging, and the ability to inspect DNS queries from IoT devices to understand what they were phoning home to.
The Build
Pi-hole on a Pi Zero 2 W (512MB RAM, quad-core Cortex-A53), with Unbound running as a local recursive resolver rather than forwarding queries to any upstream DNS provider. This eliminated the upstream DNS privacy problem entirely — queries are resolved directly against authoritative nameservers.
Unbound configuration for local recursive resolution:
server:
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    root-hints: "/var/lib/unbound/root.hints"
    harden-glue: yes
    harden-dnssec-stripped: yes
    use-caps-for-id: no
    edns-buffer-size: 1472
    prefetch: yes
    num-threads: 1
    so-rcvbuf: 1m
    private-address: 192.168.0.0/16
    private-address: 10.0.0.0/8
Pi-hole was configured to forward to 127.0.0.1#5335 (Unbound) rather than any public resolver. The router’s DHCP was updated to advertise the Pi’s IP as the sole DNS server for all clients.
For IoT traffic analysis, query logs were exported via Pi-hole’s API into a simple Python script that built a per-MAC-address breakdown of top queried domains, flagging anything outside a known-good allowlist.
import requests
from collections import defaultdict

PIHOLE_API = "http://192.168.1.2/admin/api.php"
TOKEN = "your_api_token_here"

def get_query_log(limit=1000):
    r = requests.get(f"{PIHOLE_API}?getAllQueries={limit}&auth={TOKEN}")
    return r.json().get("data", [])

def summarize_by_client(queries):
    client_domains = defaultdict(lambda: defaultdict(int))
    for q in queries:
        # Each entry: [timestamp, query type, domain, client, status, ...]
        domain, client = q[2], q[3]
        client_domains[client][domain] += 1
    return client_domains

queries = get_query_log(5000)
summary = summarize_by_client(queries)
for client, domains in sorted(summary.items()):
    print(f"\n{client}:")
    for domain, count in sorted(domains.items(), key=lambda x: -x[1])[:10]:
        print(f"  {count:>5}  {domain}")
Findings
Three IoT devices were found querying domains associated with their manufacturers’ analytics platforms at regular intervals — one smart TV was making over 400 DNS requests per day to a single telemetry endpoint. These were added to the blocklist. One thermostat was querying a domain that resolved to an IP block outside the manufacturer’s known infrastructure, flagged for further investigation.
Outcome
The Pi Zero 2 W handled peak DNS load of approximately 150 queries per second without measurable latency increase. Power draw was under 1W average. The recursive resolver eliminated a class of privacy exposure entirely. The per-device query analysis revealed behavior from commercial IoT devices that would otherwise have been invisible.
Key lesson: The Pi Zero 2 W is adequate for a single-purpose DNS appliance up to approximately 200 devices. For larger deployments, the Pi 4 2GB with a dedicated power supply is more appropriate. Running Unbound rather than forwarding to a public resolver adds complexity but provides meaningful privacy gains.
Case Study 3: Agricultural Soil Monitoring Across a Vineyard
The Problem
A small family vineyard spanning eighteen acres needed soil moisture, temperature, and humidity data from twelve discrete points across the property to optimize irrigation scheduling. Commercial precision agriculture sensors ran $2,000 to $5,000 per node. The operator had a technical background and budget for approximately $800 total.
The Build
Twelve Pi Zero W nodes (original, not Zero 2) running minimal Raspberry Pi OS Lite, each paired with:
- Capacitive soil moisture sensor (resistive sensors corrode; capacitive are the only viable option for permanent installations)
- DS18B20 waterproof temperature probe buried at 10cm depth
- DHT22 for ambient air temperature and humidity at canopy level
- Solar panel (5V/2W) and 3.7V LiPo with TP4056 charge controller
The DS18B20 used the 1-Wire protocol, requiring only a single GPIO pin plus a 4.7kΩ pull-up resistor:
import glob
import time

# 1-Wire device files appear automatically when the kernel module is loaded
# Add to /boot/config.txt: dtoverlay=w1-gpio
base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28*')[0]
device_file = device_folder + '/w1_slave'

def read_temp():
    with open(device_file, 'r') as f:
        lines = f.readlines()
    # Re-read until the CRC check line ends in 'YES'
    while lines[0].strip()[-3:] != 'YES':
        time.sleep(0.2)
        with open(device_file, 'r') as f:
            lines = f.readlines()
    equals_pos = lines[1].find('t=')
    temp_string = lines[1][equals_pos+2:]
    return float(temp_string) / 1000.0

print(f"Soil temperature: {read_temp():.2f}°C")
The capacitive moisture sensor output an analog voltage. The Pi Zero W has no ADC, so an MCP3008 SPI ADC chip was added to each node for analog-to-digital conversion.
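Reading the MCP3008 follows the standard three-byte SPI exchange. The sketch below assumes the moisture probe on channel 0 and uses invented dry/wet calibration counts; each probe needs its own pair, measured in dry air and in water:

```python
def read_adc(channel):
    """Read one 10-bit sample from an MCP3008 channel over SPI."""
    import spidev  # present on the Pi; imported lazily so the helper below runs anywhere
    spi = spidev.SpiDev()
    spi.open(0, 0)                 # SPI bus 0, chip-select 0
    spi.max_speed_hz = 1_350_000
    # MCP3008 protocol: start bit, then single-ended flag + channel, then padding
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    spi.close()
    return ((reply[1] & 3) << 8) | reply[2]

# Per-probe calibration counts (invented placeholder values)
DRY_COUNT = 790   # reading in dry air
WET_COUNT = 390   # reading submerged in water

def raw_to_percent(raw, dry=DRY_COUNT, wet=WET_COUNT):
    """Map a raw ADC count onto a clamped 0-100 % moisture scale."""
    pct = (dry - raw) / (dry - wet) * 100.0
    return max(0.0, min(100.0, pct))
```

Capacitive probes read lower counts when wetter, which is why the scale is inverted relative to the raw ADC value.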
Data was transmitted over LoRaWAN using cheap SX1278 modules — a long-range, low-power radio protocol suited to multi-acre deployments where Wi-Fi coverage is impractical. A central Pi 4 with a LoRa HAT served as the gateway, running ChirpStack as the LoRaWAN network server and feeding data into InfluxDB via Python.
# Gateway receiver (simplified)
import base64
import json

import paho.mqtt.client as mqtt
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS

INFLUX_URL = "http://localhost:8086"
INFLUX_TOKEN = "your_token"
INFLUX_ORG = "vineyard"
INFLUX_BUCKET = "soil_data"

client = influxdb_client.InfluxDBClient(
    url=INFLUX_URL, token=INFLUX_TOKEN, org=INFLUX_ORG)
write_api = client.write_api(write_options=SYNCHRONOUS)

def on_message(mqttclient, userdata, msg):
    payload = json.loads(msg.payload)
    # Node payloads arrive base64-encoded inside the ChirpStack uplink event
    data = json.loads(base64.b64decode(payload['data']))
    node_id = payload['deviceName']
    point = influxdb_client.Point("soil") \
        .tag("node", node_id) \
        .field("moisture_pct", data['moisture']) \
        .field("soil_temp_c", data['soil_temp']) \
        .field("air_temp_c", data['air_temp']) \
        .field("humidity_pct", data['humidity']) \
        .field("battery_mv", data['battery_mv'])
    write_api.write(bucket=INFLUX_BUCKET, record=point)
    print(f"Logged: {node_id} → moisture={data['moisture']}%")

mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.connect("localhost", 1883)
mqttc.subscribe("application/+/device/+/event/up")
mqttc.loop_forever()
Grafana dashboards on the central Pi displayed moisture maps, trend lines, and threshold alerts. Irrigation triggers were automated: when any node’s moisture dropped below a configurable threshold for more than two consecutive readings (to filter spurious spikes), an MQTT message was sent to a relay controller wired into the irrigation solenoids.
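The two-consecutive-readings debounce can be sketched as a small state tracker. The threshold, topic layout, and valve-open duration below are assumptions, not the vineyard’s actual configuration:

```python
import json

MOISTURE_THRESHOLD = 25.0   # percent; assumed per-block setpoint
CONSECUTIVE_REQUIRED = 2    # readings below threshold before triggering

class IrrigationTrigger:
    """Fires only after N consecutive low readings, filtering one-off spikes."""

    def __init__(self, threshold=MOISTURE_THRESHOLD, required=CONSECUTIVE_REQUIRED):
        self.threshold = threshold
        self.required = required
        self.low_streak = {}   # node id -> count of consecutive low readings

    def update(self, node, moisture_pct):
        """Return an MQTT (topic, payload) pair when a node should irrigate, else None."""
        if moisture_pct < self.threshold:
            self.low_streak[node] = self.low_streak.get(node, 0) + 1
        else:
            self.low_streak[node] = 0
        if self.low_streak[node] >= self.required:
            self.low_streak[node] = 0   # reset so we do not re-fire every cycle
            return (f"irrigation/{node}/open", json.dumps({"duration_s": 1800}))
        return None
```

A single spurious low reading resets on the next good sample, so only a sustained dry signal reaches the solenoid relay.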
Challenges
Power management was the primary engineering challenge. The Pi Zero W does not have native sleep modes accessible from userspace. Nodes used an RTC module with an alarm output wired to the Pi’s power rail via a MOSFET to cut power entirely between readings, waking on the RTC alarm every fifteen minutes. This reduced average current draw from ~120mA to ~8mA, making solar + LiPo viable.
Rain damaged three enclosures in the first season — basic IP54 cases were insufficient. Replaced with proper IP67 junction boxes. Two LoRa modules showed frequency drift after thermal cycling; replaced with temperature-compensated TCXO variants.
Outcome
Total build cost was $760 for twelve nodes plus the central gateway. The vineyard operator reported a 22% reduction in water usage in the following season due to data-driven irrigation scheduling rather than calendar-based scheduling. The system has run through three growing seasons.
Case Study 4: ADS-B Aircraft Tracking Station
The Problem
An aviation enthusiast wanted to build a ground station that would receive ADS-B transponder broadcasts from commercial aircraft, decode the position and telemetry data, feed it to the FlightAware and Flightradar24 networks (which provide free premium accounts in exchange for data), and display a local real-time map of aircraft within 200+ nautical miles.
The Build
A Pi 4 (2GB) with:
- RTL-SDR Blog V3 USB software-defined radio dongle (~$30)
- 1090 MHz bandpass filter to attenuate out-of-band interference
- Collinear vertical antenna built from RG-6 coaxial cable per published instructions; homemade antennas consistently outperform the stub antenna bundled with SDR dongles for this use case
- Antenna mounted externally at rooftop height
Software: dump1090-fa (FlightAware’s fork) for ADS-B decoding, tar1090 for the local map interface, fr24feed and piaware for feeding the commercial networks.
# Install dump1090-fa
sudo apt install dump1090-fa
# dump1090 decodes incoming 1090MHz signals and outputs
# decoded aircraft positions via network on port 30003 (raw)
# and 8080 (SkyAware web interface)
# tar1090 provides a more feature-rich local map
sudo bash -c "$(curl -L -o - https://github.com/wiedehopf/tar1090/raw/master/install.sh)"
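The decoded feed on port 30003 is plain comma-separated BaseStation (SBS) text, so downstream consumers are trivial to write. A sketch that tallies distinct airframes; field positions follow the common SBS-1 layout, and the host/port defaults are assumptions:

```python
import socket

def parse_sbs(line):
    """Extract (hex_ident, callsign, lat, lon) from one SBS BaseStation line.

    In the common SBS-1 layout, index 4 is the ICAO hex ident, index 10 the
    callsign, and indices 14/15 carry lat/lon on MSG,3 position reports.
    """
    f = line.strip().split(',')
    if len(f) < 16 or f[0] != 'MSG':
        return None
    lat = float(f[14]) if f[14] else None
    lon = float(f[15]) if f[15] else None
    return (f[4], f[10].strip(), lat, lon)

def tally_aircraft(host="127.0.0.1", port=30003, lines=500):
    """Connect to dump1090's SBS output and count distinct hex idents in a sample."""
    seen = set()
    with socket.create_connection((host, port)) as s:
        buf = s.makefile()
        for _ in range(lines):
            parsed = parse_sbs(buf.readline())
            if parsed:
                seen.add(parsed[0])
    return len(seen)
```

Running `tally_aircraft()` against a live dump1090-fa instance returns the number of unique aircraft seen across the sampled lines.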
RTL-SDR dongles run hot and drift in frequency as they warm up, so the first minutes of operation show elevated error rates. A USB fan mounted in the enclosure stabilized the temperature. The dongle’s bias-T was enabled to power an LNA (low-noise amplifier) mounted at the antenna base, which added approximately 80 nautical miles to the effective reception range.
Performance
From a suburban location with the antenna at approximately 8 meters above ground:
- Consistent reception out to 200–250 nautical miles
- Peak 400+ simultaneous aircraft tracked during busy afternoon windows
- CPU load on the Pi under decode + both network feeds: approximately 15%
- Uptime: 847 days on last measurement before a power outage
The homemade collinear antenna, built for under $10 in cable and connectors, outperformed a commercial $80 antenna in A/B testing: a counterintuitive but well-documented result in the SDR community. A collinear’s radiation pattern concentrates gain toward the horizon, which is exactly where distant aircraft sit.
Data volume: The station uploaded approximately 1.2GB of ADS-B position data per month to the commercial networks. In exchange, both FlightAware and Flightradar24 provided free premium subscriptions — a $30–40/month combined value.
Extensions
A second RTL-SDR dongle on the same Pi was configured to receive ACARS (Aircraft Communications Addressing and Reporting System) messages on 131.525 MHz — the airline operational communications channel. ACARS messages include weather observations, gate assignments, fuel requests, and maintenance codes. These were decoded with acarsdec and logged to a local database.
Key lesson: A single Pi 4 can comfortably run two independent SDR decode pipelines simultaneously. CPU is rarely the bottleneck — USB bandwidth is. If running more than two RTL-SDR dongles, distribute across two Pis or use a USB hub with individual power switching.
Case Study 5: Museum Interactive Exhibit Controller
The Problem
A small natural history museum needed to replace aging touchscreen kiosk controllers — Windows XP machines inside custom enclosures — that were failing and no longer supportable. The exhibits displayed animated content, responded to proximity sensors, and played audio. Each machine cost $1,800 to replace commercially. The museum had nine such exhibits and a technology budget of $3,000 total.
The Build
Nine Pi 4 (4GB) units, each running Raspberry Pi OS with a custom autostart configuration. Displays were existing 24-inch monitors retained from the original installation.
Each unit ran:
- Chromium in kiosk mode for HTML5/CSS3/JavaScript animated content
- vlc for audio playback, triggered via MQTT messages from sensor events
- A Python service reading from a PIR motion sensor and ultrasonic distance sensor to detect visitor approach
# Exhibit controller daemon
import paho.mqtt.client as mqtt
import RPi.GPIO as GPIO
import time
import subprocess

GPIO.setmode(GPIO.BCM)
PIR_PIN = 17
TRIG_PIN = 23
ECHO_PIN = 24
GPIO.setup(PIR_PIN, GPIO.IN)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

client = mqtt.Client()
client.connect("192.168.10.1", 1883)

EXHIBIT_ID = "exhibit_03_trilobite"
IDLE_CONTENT = "idle_loop.html"
ACTIVE_CONTENT = "trilobite_interactive.html"
ACTIVE_AUDIO = "/media/exhibits/trilobite_narration.ogg"

state = "idle"
last_motion = 0
ACTIVE_TIMEOUT = 120  # seconds

def get_distance():
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)
    # Guard both waits with a deadline so a missing echo cannot hang the loop
    deadline = time.time() + 0.04
    pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        pulse_start = time.time()
        if pulse_start > deadline:
            return float('inf')
    pulse_end = pulse_start
    while GPIO.input(ECHO_PIN) == 1:
        pulse_end = time.time()
        if pulse_end > deadline:
            break
    return (pulse_end - pulse_start) * 17150  # cm

browser = None

def load_page(url):
    # Relaunch Chromium in kiosk mode on the new page. Popen (not run)
    # keeps the daemon from blocking for the lifetime of the browser.
    global browser
    if browser is not None:
        browser.terminate()
    browser = subprocess.Popen(["chromium-browser", "--kiosk", url])

def play_audio(path):
    subprocess.Popen(["cvlc", "--play-and-exit", path])

def activate():
    global state
    state = "active"
    load_page(ACTIVE_CONTENT)
    play_audio(ACTIVE_AUDIO)
    client.publish(f"exhibit/{EXHIBIT_ID}/state", "active")

def deactivate():
    global state
    state = "idle"
    load_page(IDLE_CONTENT)
    client.publish(f"exhibit/{EXHIBIT_ID}/state", "idle")

try:
    while True:
        motion = GPIO.input(PIR_PIN)
        distance = get_distance()
        visitor_present = motion or distance < 150  # within 1.5 meters
        if visitor_present:
            last_motion = time.time()
            if state == "idle":
                activate()
        if state == "active" and (time.time() - last_motion) > ACTIVE_TIMEOUT:
            deactivate()
        time.sleep(0.5)
except KeyboardInterrupt:
    GPIO.cleanup()
A central Pi 4 on the museum’s network ran an MQTT broker and a simple Node-RED dashboard showing the state of all nine exhibits in real time: which were active, which were idle, and any sensor faults. This gave staff situational awareness without walking the floor.
Reliability Engineering
Museum environments are hard on electronics: power interruptions from cleaners unplugging units, children pressing reset buttons, displays going to sleep at inopportune moments. The solution:
- Read-only root filesystem (overlayfs), so that any power interruption left no corrupted state. A writable partition was mounted separately for content updates only.
- All content pre-cached locally, with no network dependency during operation.
- Systemd services configured with Restart=always and RestartSec=5.
- Displays configured at EDID level to disable power management.
- Physical cases built with the SD card slot and USB ports behind a lock panel.
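The service supervision pattern is plain systemd; a minimal unit along these lines, with assumed paths and user:

```ini
# /etc/systemd/system/exhibit-controller.service
[Unit]
Description=Exhibit sensor/content controller
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /opt/exhibit/controller.py
Restart=always
RestartSec=5
User=exhibit

[Install]
WantedBy=graphical.target
```

With `Restart=always`, a crashed daemon is back within five seconds; combined with the read-only root, a hard power cycle is always a safe recovery path.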
Outcome
Total hardware cost: $2,200 for nine units including cases, sensors, and cabling. Per-unit cost: ~$245 versus $1,800 commercial. The content management system was rebuilt in HTML5/JS and runs faster on the Pi than it did on the original Windows XP machines. Three years of operation with zero exhibit failures attributable to hardware.
Case Study 6: Distributed Weather Station Network
The Problem
A meteorology graduate student needed fine-grained local weather data — temperature inversion layers, wind shear at multiple heights, precipitation granularity across a 5km grid — that commercial weather stations neither provided nor could be economically deployed at the required density. Commercial Davis Vantage Pro stations ran $600+. The student needed fifteen stations.
The Build
Fifteen Pi Zero 2 W units, each in a weatherproof radiation shield (louvered white PVC enclosure), measuring:
- BME280 — temperature, humidity, barometric pressure (I2C, $3)
- Anemometer and wind vane — converted from salvaged Davis stations via ADC
- Rain gauge tipping bucket — GPIO interrupt-based count
- UV index sensor (VEML6075) — I2C
All nodes transmitted over cellular using cheap Huawei E3372 LTE dongles with data-only SIMs. The alternative — LoRaWAN or Wi-Fi — was impractical over a 5km irregular outdoor grid.
import time

import smbus2
import bme280
import requests
import RPi.GPIO as GPIO

port = 1
address = 0x76
bus = smbus2.SMBus(port)
calibration_params = bme280.load_calibration_params(bus, address)

RAIN_PIN = 4
rain_count = 0
BUCKET_MM = 0.2794  # mm per tip for a standard tipping bucket

GPIO.setmode(GPIO.BCM)
GPIO.setup(RAIN_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def rain_tip(channel):
    global rain_count
    rain_count += 1

GPIO.add_event_detect(RAIN_PIN, GPIO.FALLING,
                      callback=rain_tip, bouncetime=300)

STATION_ID = "ws_07"
API_ENDPOINT = "https://weather.university.edu/api/v1/readings"
API_KEY = "station_key_here"

while True:
    data = bme280.sample(bus, address, calibration_params)
    rain_mm = rain_count * BUCKET_MM
    rain_count = 0  # reset after reading
    payload = {
        "station": STATION_ID,
        "timestamp": time.time(),
        "temperature_c": round(data.temperature, 2),
        "humidity_pct": round(data.humidity, 1),
        "pressure_hpa": round(data.pressure, 2),
        "rain_mm_5min": round(rain_mm, 2),
    }
    try:
        r = requests.post(API_ENDPOINT,
                          json=payload,
                          headers={"X-API-Key": API_KEY},
                          timeout=10)
        r.raise_for_status()
    except requests.RequestException as e:
        # Log locally and retry on next cycle
        with open("/var/log/ws_errors.log", "a") as f:
            f.write(f"{time.time()}: {e}\n")
    time.sleep(300)  # 5-minute interval
Data was aggregated in a university server running TimescaleDB (PostgreSQL with time-series extensions), visualized with Grafana, and exported in NetCDF format for compatibility with standard meteorological analysis tools.
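On the server side, the TimescaleDB setup reduces to ordinary PostgreSQL DDL plus one extension call; a sketch with an assumed column layout mirroring the station payload:

```sql
-- Assumed table layout; column names mirror the station JSON payload
CREATE TABLE readings (
    time           TIMESTAMPTZ NOT NULL,
    station        TEXT        NOT NULL,
    temperature_c  DOUBLE PRECISION,
    humidity_pct   DOUBLE PRECISION,
    pressure_hpa   DOUBLE PRECISION,
    rain_mm_5min   DOUBLE PRECISION
);

-- Convert to a hypertable partitioned on time (TimescaleDB extension)
SELECT create_hypertable('readings', 'time');
```

After that, Grafana queries it like any PostgreSQL source, and time-bucketed aggregates stay fast as the dataset grows across seasons.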
Power
Most stations ran from grid power where available. Four remote stations used 20W solar panels with 20Ah LiPo battery banks. Power management relied on a hard power cut rather than software sleep: the Pi shut down for four minutes out of every five and was re-powered by an RTC alarm to take a reading and transmit. The cellular modem was cut from power via a GPIO-controlled relay when not transmitting.
Data Quality
BME280 sensors in enclosed radiation shields suffer from self-heating — residual heat from the Pi warms the enclosure, biasing temperature readings 0.5–1.5°C high under calm, sunny conditions. Correction factors were derived by co-locating each station temporarily with a calibrated reference instrument. The correction was applied in the ingestion pipeline, not the firmware — maintaining raw data integrity.
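A sketch of how such a correction might be applied in the ingestion pipeline, with invented slope/offset pairs; raw values pass through untouched:

```python
# Per-station linear corrections derived from co-location runs.
# The (slope, offset) pairs below are invented placeholders.
CORRECTIONS = {
    "ws_07": (0.98, -0.9),   # reads roughly 0.9 degC high under calm sun
    "ws_11": (1.00, -0.4),
}

def correct_temperature(station, raw_c):
    """Return (raw, corrected); raw is stored unmodified to preserve data integrity."""
    slope, offset = CORRECTIONS.get(station, (1.0, 0.0))
    return raw_c, round(raw_c * slope + offset, 2)
```

Storing both columns means the correction can be re-derived and re-applied later without touching the original measurements.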
Outcome
Fifteen stations at a fraction of the cost of commercial deployment. The dataset captured a measurable temperature inversion event that the nearest official weather station — 8km away — showed no trace of. The data appeared in a published paper on urban heat island microstructure.
Case Study 7: Warehouse Inventory Tracking With Computer Vision
The Problem
A small parts distributor warehouse was losing 3–5 hours per week to manual inventory counts on a set of sixty shelf positions. Items were added and removed throughout the day without being logged, causing stock discrepancies. A commercial RFID system was quoted at $15,000 for installation. The operations manager wanted a technology solution under $2,000.
The Build
Eight Pi 4 (4GB) units with Pi Camera Module 3 (autofocus, 12MP), mounted to view shelf sections from fixed overhead positions. Each unit ran a computer vision pipeline using TensorFlow Lite with a custom-trained object detection model based on MobileNetV2, trained on photos of the warehouse’s actual SKUs.
The training dataset was assembled by photographing approximately 200 distinct parts at varied orientations and lighting conditions — around 50 images per SKU — and augmented with rotation, brightness variation, and synthetic occlusion to improve robustness.
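None of that augmentation requires a heavyweight framework. A numpy-only sketch with invented parameters (90-degree rotations, brightness jitter, one rectangular occlusion patch):

```python
import numpy as np

def augment(img, rng):
    """Return one augmented copy of an HxWx3 uint8 image."""
    # Random rotation in 90-degree steps (note: odd steps swap H and W)
    out = np.rot90(img, k=int(rng.integers(0, 4))).copy()
    # Brightness jitter of +-20 %
    gain = rng.uniform(0.8, 1.2)
    out = np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    # Synthetic occlusion: black out a random rectangle up to a quarter of each side
    h, w = out.shape[:2]
    oh, ow = int(rng.integers(1, h // 4 + 1)), int(rng.integers(1, w // 4 + 1))
    y, x = int(rng.integers(0, h - oh + 1)), int(rng.integers(0, w - ow + 1))
    out[y:y + oh, x:x + ow] = 0
    return out

rng = np.random.default_rng(0)
sample = rng.integers(0, 256, size=(64, 48, 3), dtype=np.uint8)
augmented = [augment(sample, rng) for _ in range(8)]
```

Each original photo thus yields many training variants; the occlusion patch in particular pushes the model toward the partially-obscured case that dominated the detection failures noted below.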
import json
import time

import cv2
import numpy as np
import paho.mqtt.client as mqtt
import tflite_runtime.interpreter as tflite

MODEL_PATH = "/models/warehouse_detector.tflite"
LABELS_PATH = "/models/labels.txt"
CAMERA_ID = "shelf_cam_03"

interpreter = tflite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']  # e.g. [1, 300, 300, 3]

with open(LABELS_PATH) as f:
    labels = [line.strip() for line in f]

client = mqtt.Client()
client.connect("192.168.1.10", 1883)

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)

CONFIDENCE_THRESHOLD = 0.65
SCAN_INTERVAL = 60  # seconds

def run_inference(frame):
    img = cv2.resize(frame, (input_shape[2], input_shape[1]))
    # OpenCV captures BGR; convert to the RGB ordering the model was trained on
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = np.expand_dims(img, axis=0).astype(np.uint8)
    interpreter.set_tensor(input_details[0]['index'], img)
    interpreter.invoke()
    boxes = interpreter.get_tensor(output_details[0]['index'])[0]
    classes = interpreter.get_tensor(output_details[1]['index'])[0]
    scores = interpreter.get_tensor(output_details[2]['index'])[0]
    detections = []
    for i, score in enumerate(scores):
        if score >= CONFIDENCE_THRESHOLD:
            detections.append({
                "sku": labels[int(classes[i])],
                "confidence": float(score),
                "box": boxes[i].tolist()
            })
    return detections

while True:
    ret, frame = cap.read()
    if not ret:
        continue
    detections = run_inference(frame)
    # Count unique SKUs detected
    sku_counts = {}
    for d in detections:
        sku_counts[d['sku']] = sku_counts.get(d['sku'], 0) + 1
    payload = {
        "camera": CAMERA_ID,
        "timestamp": time.time(),
        "counts": sku_counts
    }
    client.publish("warehouse/inventory/update", json.dumps(payload))
    print(f"Published: {sku_counts}")
    time.sleep(SCAN_INTERVAL)
A central server aggregated detection counts from all eight cameras, reconciled them against a master inventory database, and flagged discrepancies greater than two units for manual verification.
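The reconciliation step itself is a straightforward diff against the master counts; a sketch assuming the two-unit tolerance:

```python
DISCREPANCY_TOLERANCE = 2  # units; differences beyond this go to manual verification

def reconcile(detected, master):
    """Compare camera counts with master inventory; return SKUs needing review.

    detected/master: dicts of sku -> count. SKUs absent from one side count as 0,
    so an item the cameras never see still gets flagged.
    """
    flags = {}
    for sku in set(detected) | set(master):
        diff = detected.get(sku, 0) - master.get(sku, 0)
        if abs(diff) > DISCREPANCY_TOLERANCE:
            flags[sku] = diff
    return flags
```

The signed difference is kept so staff can tell an over-count (misdetection, restock not yet logged) from an under-count (removal not logged, occlusion) at a glance.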
Challenges
Inference on Pi 4 without a hardware accelerator ran at approximately 3–4 frames per second — adequate for a once-per-minute batch scan, inadequate for real-time tracking. Items partially obscured by each other (stacked, overlapping) had detection rates around 60%, versus 94% for fully visible items. Shadow variation from skylights caused false negatives until exposure compensation was tuned.
Adding a Google Coral USB Accelerator ($60) increased inference speed to 30+ FPS and allowed confidence thresholds to be raised, reducing false positives.
Outcome
Build cost for eight camera stations: $1,840 including Coral accelerators. Inventory discrepancy rate dropped from approximately 8% to below 2% in the first month. Manual count time reduced from 3–5 hours/week to 30-minute weekly audits on flagged discrepancies only.
Case Study 8: Remote Infrastructure Monitoring for a Mountain Hut
The Problem
An alpine club maintained a mountain refuge hut at 2,800m elevation — unmanned outside of the summer season. Burst pipes, generator failures, and intruder events had caused significant damage over three winters. The club needed remote monitoring: temperature, intrusion detection, generator status, water pipe temperature, and ideally camera coverage. There was no grid power and no wired connectivity. Satellite internet was cost-prohibitive.
The Build
A Pi Zero 2 W running on a 100Ah AGM battery charged by two 60W solar panels and a 400W wind turbine, managed by a Victron MPPT charge controller. The Pi communicated via a RockBLOCK 9603 Iridium satellite modem — a global two-way data link that works anywhere on Earth at approximately $0.10 per 50-byte message.
The system was designed around extreme data frugality — Iridium costs scale with message volume. All telemetry was packed into a compact binary structure transmitted once per hour under normal conditions, and once per five minutes when an alarm threshold was crossed.
import glob
import struct
import time

import serial
import RPi.GPIO as GPIO

# RockBLOCK connected via UART at /dev/ttyUSB0
IRIDIUM_PORT = "/dev/ttyUSB0"
IRIDIUM_BAUD = 19200

PIR_PIN = 23

def read_ds18b20(device_path):
    try:
        with open(device_path, 'r') as f:
            lines = f.readlines()
        pos = lines[1].find('t=')
        return float(lines[1][pos+2:]) / 1000.0
    except (OSError, IndexError, ValueError):
        return -99.0  # sentinel for read failure

def pack_telemetry(temps, motion, battery_mv, solar_w):
    # 13 bytes: 4 temps (float16) + motion flag + battery (mV) + solar (W)
    return struct.pack('>4eBHH',
                       temps[0], temps[1], temps[2], temps[3],
                       int(motion),
                       int(battery_mv),
                       int(solar_w))

def send_iridium(data_bytes):
    with serial.Serial(IRIDIUM_PORT, IRIDIUM_BAUD, timeout=30) as ser:
        ser.write(f"AT+SBDWB={len(data_bytes)}\r".encode())
        time.sleep(1)
        # SBDWB expects the payload followed by a 2-byte big-endian checksum
        checksum = sum(data_bytes) & 0xFFFF
        ser.write(data_bytes + checksum.to_bytes(2, 'big'))
        time.sleep(1)
        ser.write(b"AT+SBDIX\r")  # initiate SBD session
        response = ser.read(256).decode(errors='ignore')
        # Reply is "+SBDIX: <MO status>,..."; MO status 0-2 indicates success
        for line in response.splitlines():
            if "+SBDIX:" in line:
                mo_status = int(line.split(":")[1].split(",")[0])
                return mo_status <= 2
        return False

motion_detected = False

def motion_callback(channel):
    global motion_detected
    motion_detected = True

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)
GPIO.add_event_detect(PIR_PIN, GPIO.RISING, callback=motion_callback)

# First four DS18B20 probes on the 1-Wire bus, in stable sorted order
temp_devices = sorted(glob.glob('/sys/bus/w1/devices/28*/w1_slave'))[:4]

NORMAL_INTERVAL = 3600  # 1 hour
ALARM_INTERVAL = 300    # 5 minutes
ALARM_TEMP_LOW = -5.0   # pipes at risk below this
ALARM_TEMP_HIGH = 35.0  # fire risk above this

while True:
    temps = [read_ds18b20(d) for d in temp_devices]
    battery_mv = 12600  # read from Victron via Modbus (simplified)
    solar_w = 45        # read from Victron
    alarm = (any(t < ALARM_TEMP_LOW or t > ALARM_TEMP_HIGH for t in temps)
             or motion_detected
             or battery_mv < 11500)
    payload = pack_telemetry(temps, motion_detected, battery_mv, solar_w)
    success = send_iridium(payload)
    if success:
        motion_detected = False
    interval = ALARM_INTERVAL if alarm else NORMAL_INTERVAL
    time.sleep(interval)
On the receiving end, a small Python service decoded incoming Iridium messages via the RockBLOCK web API, stored readings in a database, and sent SMS alerts for alarm conditions.
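A matching decoder simply mirrors the struct layout on the hut side (four float16 temperatures, a motion flag, battery millivolts, and solar watts in 13 big-endian bytes); a sketch:

```python
import struct

def decode_telemetry(data_bytes):
    """Unpack the hut's 13-byte big-endian telemetry frame."""
    t0, t1, t2, t3, motion, battery_mv, solar_w = struct.unpack('>4eBHH', data_bytes)
    return {
        "temps_c": [t0, t1, t2, t3],
        "motion": bool(motion),
        "battery_mv": battery_mv,
        "solar_w": solar_w,
    }
```

Float16 gives roughly two decimal digits of precision in this temperature range, which is ample for freeze and fire thresholds while halving the per-message Iridium cost versus float32.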
Power Budget
The Iridium modem draws 145mA peak during transmission (approximately 20 seconds per session). The Pi Zero 2 W at idle draws 100mA. The AGM battery, wind turbine, and solar panels were sized to maintain operation through seven consecutive cloudy, windless days — the worst-case winter scenario in the region.
A critical design decision: the Pi was configured to fully power down between readings using the same RTC-alarm-MOSFET approach as the vineyard project. Without this, the continuous 100mA draw would deplete the battery in under 40 days with no charge input.
Outcome
The system detected a pipe freezing event in its first winter — temperatures in the unheated crawl space dropped below -8°C over a three-day cold spell. The alarm triggered the caretaker to arrange emergency heating before a burst occurred. In the club’s estimate, this single event justified the entire build cost. The system has operated for two winters with one battery replacement.
What These Cases Have in Common
Across these eight deployments, several patterns recur consistently:
Power is almost always the hardest problem. Whether it’s USB bus saturation on a router, thermal instability on an SDR dongle, or solar budget calculations for an alpine hut, power — how it’s supplied, how it’s measured, and how it’s conserved — consumes more engineering time than any other single constraint.
The Pi is almost never the bottleneck. In every case, some adjacent component — the SD card, the USB adapter, the radio module, the sensor — imposed the real performance ceiling. Routing performance was limited by USB bus bandwidth. Vision inference was limited by lack of hardware acceleration. Telemetry cost was limited by Iridium’s pricing model. Designing around the adjacent constraint is the actual engineering work.
Reliability requires treating the Pi like a server, not a desktop. Read-only filesystems, watchdog timers, systemd service supervision, and proper power supply sizing are not optional for unattended deployments. Systems that are built with these from the start survive. Systems that are not require maintenance trips.
The cost advantage is real but not absolute. In every commercial comparison, the Pi-based build was 80–90% cheaper. That gap is real. But it does not account for engineering time, which can be substantial on a first deployment. The second deployment of the same design is fast and cheap. The first is an education.
Scale compounds the advantage. A single museum kiosk at $245 versus $1,800 saves $1,555. Nine kiosks save roughly $14,000. A fifteen-station weather network avoids around $9,000 in commercial hardware alone. The per-unit cost of the Pi hardware is low enough that the economic argument becomes overwhelming at any meaningful scale.
The Raspberry Pi has earned its place as a serious engineering platform not because it is cheap — though it is — but because it is capable, well-documented, and has a decade of community-solved problems behind it. For anyone operating in constrained environments with real instrumentation requirements, the question is rarely whether a Pi can solve the problem. It is which Pi, running what, wired to what — and whether you’ve sized the power supply properly.
All deployments described reflect real implementation patterns documented in technical communities, adapted for illustration. Specific sensor models, library versions, and code patterns reflect typical practice as of 2025–2026.