The spreadsheet said 18 months
I built a spreadsheet. It was beautiful. Sleep current times sleep hours, plus transmit current times transmit time, divided by battery capacity. The math was clean, the numbers were optimistic, and the conclusion was clear: 18 months on 2 AA batteries, easy.
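The whole spreadsheet fits in a few lines of code. Here is a minimal sketch of that naive two-term model (the function name and structure are mine; the currents and the 96-uplink cadence are the ones used throughout this post):

```python
# A sketch of the naive "spreadsheet" estimate: sleep current plus TX
# bursts, nothing else. Purely illustrative; real budgets need far more
# terms, as the rest of this post shows.

def naive_lifetime_days(capacity_mah, sleep_ua, burst_ma, burst_s_per_day):
    """Battery life from sleep current plus a daily total of active bursts."""
    sleep_uah = sleep_ua * 24                              # uAh per day asleep
    burst_uah = burst_ma * 1000 * burst_s_per_day / 3600   # mA * s -> uAh
    return capacity_mah * 1000 / (sleep_uah + burst_uah)

# Datasheet sleep (1.4 uA), 120 mA TX for 52 ms, 96 uplinks/day, 4500 mAh:
days = naive_lifetime_days(4500, 1.4, 120, 0.052 * 96)
```

With only these two terms the model predicts decades of runtime, which is exactly the kind of answer that should make you suspicious of the model rather than pleased with the battery.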
The first field deployment lasted 3 months.
This is the story of where those 15 months went, and the painful education in power budgeting that followed. If you're building battery-powered LoRaWAN nodes, this might save you a few field trips.
The battery you think you have
Two AA lithium batteries. The datasheet says 3000mAh each at 1.5V. Running through a buck-boost to 3.3V, that gives you roughly 4500mAh equivalent to play with. Sounds generous.
Here's your first lesson: that 3000mAh number is measured at room temperature (25 degrees C) with a constant discharge current optimized for the battery chemistry. Your sensor node is in a field. In winter. At 0 degrees C, lithium AA capacity drops about 20%. At -10 degrees C, you're losing 30% or more. And battery self-discharge increases with temperature cycling, so those hot summer days followed by cold nights are quietly eating your capacity even when the node is asleep.
Your 4500mAh budget just became 3600mAh. We haven't even turned the radio on yet.
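The derating above can be captured in a rough piecewise-linear sketch. This is illustrative only, anchored to the two figures quoted in the text (~20% loss at 0 degrees C, ~30% at -10 degrees C); real curves depend on discharge rate and the specific cell.

```python
# Rough temperature derating for lithium AA capacity. Piecewise-linear
# between the figures quoted above; not a real battery model.

def derated_capacity_mah(nominal_mah, temp_c):
    if temp_c >= 25:
        return nominal_mah                           # rated at room temp
    if temp_c >= 0:
        loss = 0.20 * (25 - temp_c) / 25             # 0% at 25 C -> 20% at 0 C
    else:
        loss = 0.20 + 0.10 * min(-temp_c, 10) / 10   # 20% at 0 C -> 30% at -10 C
    return nominal_mah * (1 - loss)
```

Feeding the 4500mAh pack through this at 0 degrees C gives the 3600mAh used for the rest of the budget.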
Sleep is not free
The datasheet for the microcontroller says 1.2 microamps in deep sleep. Great. The LoRa radio datasheet says 0.2 microamps in sleep mode. Even better. Total sleep current: 1.4 microamps. At that rate, 4500mAh would last over 350 years; shelf life would kill the batteries long before the load did.
I measured the actual board. 12 milliamps.
Where did the other 11,998.6 microamps come from? Everywhere.
The voltage regulator. That LDO you picked because it was cheap and available had a quiescent current of 50 microamps. Doesn't sound like much until you notice it's more than 35 times the entire 1.4 microamp sleep budget all by itself. Switched to a 300nA Iq regulator. Problem partially solved.
The RTC. The external RTC for accurate wakeup timing drew 1.8 microamps, not the 0.9 microamps in the datasheet. The datasheet number was for the bare IC. With the crystal oscillator and decoupling caps on my board layout, reality was double.
The sensor. The soil moisture sensor had a "sleep" mode that still drew 800 microamps. The "off" mode required cutting power via a MOSFET switch, which I hadn't included in the design. Revision two of the PCB got a high-side P-FET for sensor power control.
The pull-up resistors. Two 10K pull-ups on the I2C bus, tied to 3.3V. Whenever a bus line is held low, that's 3.3V across 10K: 330 microamps per line, 660 microamps total, and a misbehaving sensor can hold the lines low around the clock. Moved to internal pull-ups that only activate during sensor reads.
After three board revisions and a lot of time with a micro-current meter, I got sleep current down to 3.4 microamps. That's still nearly three times the datasheet number for the MCU alone, but it's real.
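Tallying the contributors after the fixes gives a sanity check on that measurement. The values below are the ones quoted above; the dict is just bookkeeping:

```python
# Sleep-current contributors after the board fixes. Values from the audit
# above; datasheet figures for the MCU and radio, measured for the rest.

SLEEP_UA = {
    "mcu_deep_sleep": 1.2,    # MCU datasheet figure
    "lora_radio_sleep": 0.2,  # radio datasheet figure
    "ldo_quiescent": 0.3,     # after the swap to a 300 nA Iq regulator
    "external_rtc": 1.8,      # measured on-board, double the datasheet
}

total_sleep_ua = sum(SLEEP_UA.values())
```

The sum lands at 3.5 microamps against the 3.4 measured, close enough that the gap is down to typical-vs-actual datasheet figures and meter tolerance.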
The receive windows nobody budgets for
Here's where most LoRaWAN battery spreadsheets go catastrophically wrong.
Everyone remembers to budget for transmit. It's dramatic: 120mA for 52ms at SF7, 14dBm. That's a burst you can see on the scope. It feels expensive. But 120mA for 52ms is only 1.73 microamp-hours per transmission. At 96 transmissions per day (15-minute interval), that's 166 microamp-hours per day, about a sixth of the optimized daily budget and less than a quarter of what the receive windows cost. Transmit is cheaper than it looks.
The receive windows are the silent killer.
LoRaWAN Class A devices open two receive windows after every uplink. RX1 opens 1 second after the TX ends. RX2 opens 2 seconds after TX. During these windows, the radio is in receive mode, drawing about 14mA. If no downlink arrives, each window stays open for up to 1 second (the RX timeout).
That's 14mA for up to 1 second, twice per uplink. Per transmission, that's up to 7.78 microamp-hours, more than four times the transmit cost. At 96 transmissions per day: 747 microamp-hours just for listening to silence. That's nearly three-quarters of the optimized daily budget, spent doing nothing useful.
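The per-uplink comparison is worth writing out, using the numbers above (SF7 TX at 120mA for 52ms; RX at 14mA with up to 1 second per window):

```python
# Per-uplink energy: the TX burst vs. the two Class A receive windows.

def burst_uah(current_ma, seconds):
    """Microamp-hours consumed by a constant-current burst."""
    return current_ma * 1000 * seconds / 3600

tx_uah = burst_uah(120, 0.052)   # ~1.73 uAh per uplink
rx_uah = burst_uah(14, 2.0)      # ~7.78 uAh worst case (2 x 1 s windows)
rx_per_day = rx_uah * 96         # ~747 uAh/day listening to silence
```

The burst you can see on the scope costs less than a quarter of the one you can't.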
And this is for Class A, the most power-efficient class. Class B and C are worse by orders of magnitude.
The math that actually works
Let me walk through the real power budget for the agriculture sensor node after optimization.
Sleep (23.87 hours/day): 3.4 microamps x 23.87 hours = 81.2 microamp-hours
Sensor reads (96/day, 200ms each including ADC warmup): 8mA x 0.2s x 96 = 42.7 microamp-hours
TX bursts (96/day, 52ms each at 120mA): 120mA x 0.052s x 96 = 166.4 microamp-hours
RX windows (96/day, worst case 2s total at 14mA): 14mA x 2s x 96 = 746.7 microamp-hours
Total daily consumption: ~1,037 microamp-hours/day
With a realistic battery capacity of 3,600mAh (accounting for temperature derating): 3,600,000 / 1,037 = 3,471 days. That's 9.5 years.
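The budget above is small enough to keep as code rather than a spreadsheet. Tweaking any line item and re-running is how you find out which knob actually matters:

```python
# The optimized daily budget, as code. Numbers are the line items above.

def uah(ma, seconds):
    """Microamp-hours for a constant-current burst."""
    return ma * 1000 * seconds / 3600

daily_uah = (
    3.4 * 23.87             # sleep: 3.4 uA for 23.87 h
    + uah(8, 0.2) * 96      # sensor reads: 8 mA, 200 ms, 96x/day
    + uah(120, 0.052) * 96  # TX: 120 mA, 52 ms, 96x/day
    + uah(14, 2.0) * 96     # RX: 14 mA, 2 s worst case, 96x/day
)

lifetime_days = 3_600_000 / daily_uah   # 3600 mAh derated pack
```

Halve the RX term and lifetime jumps by years; halve the TX term and it barely moves.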
Nine and a half years sounds absurd, so it's worth squaring with what actually happened. After the optimization, the math genuinely clears the original 18-month target with years to spare. The original 3-month lifetime happened because sleep current was 12mA, not 3.4 microamps. At a constant 12mA you burn 286,440 microamp-hours per day just sleeping, which drains the pack in 12.6 days. The node lasted 3 months rather than 12 days because it wasn't pinned at 12mA the whole time: it was waking, failing to read the sensor, and going back to sleep in a tight loop whose long-run average worked out to roughly 1.7mA, which empties 3,600mAh in about 90 days.
The lesson: sleep current dominates everything. Getting sleep current right is worth more than any other optimization.
The optimization that actually mattered
The biggest single improvement wasn't hardware. It was the receive window strategy.
The original firmware used confirmed uplinks for every transmission. That means the network server sends an ACK in the RX1 window, and the node waits for it. If the ACK doesn't arrive in RX1, it waits for RX2. If neither window delivers an ACK, the node retransmits. Up to 8 times.
Switched to unconfirmed uplinks for routine sensor data. The soil moisture reading from 15 minutes ago doesn't need guaranteed delivery. If one reading is lost, the next one arrives in 15 minutes. Confirmed uplinks are reserved for critical alerts only (battery low, sensor fault, threshold exceeded).
Also reduced the receive window timeouts from 1 second to 500ms. The network server responds quickly if it's going to respond at all. A 500ms window catches 99.7% of downlinks while cutting receive energy nearly in half.
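The policy is simple enough to sketch. The event names and the function below are illustrative, not from any particular LoRaWAN stack; wire the result into whatever your stack's send call expects:

```python
# Sketch of the confirmation policy described above: routine readings go
# unconfirmed, only critical alerts request an ACK.

CRITICAL = {"battery_low", "sensor_fault", "threshold_exceeded"}

def uplink_options(event):
    """Return send options for a given event type."""
    return {
        "confirmed": event in CRITICAL,
        "rx_timeout_ms": 500,   # trimmed from the 1 s default
    }
```

The point is that confirmation is a per-message decision, not a global setting, and the default should be the cheap path.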
The field vs the bench
The final humbling lesson: bench testing is necessary but insufficient.
On the bench at 22 degrees C with a fresh battery and a stable power supply, the node performed exactly as calculated. In the field, three additional factors appeared:
Antenna mismatch. The PCB antenna was tuned for 868MHz in free space. Buried 5cm in soil with a cable to the sensor, the VSWR degraded, and the PA drew more current to compensate. TX current went from 120mA to 140mA.
Retransmissions. Even with unconfirmed uplinks, the MAC layer can repeat each frame (the NbTrans setting) if the duty cycle allows, and the ADR algorithm can decide you need a higher spreading factor. SF7 takes 52ms. SF12 takes 1.8 seconds. At 120mA. One transmission at SF12 costs as much as 35 at SF7.
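The spreading-factor penalty is just airtime, so the cost ratio falls straight out of the numbers above (52ms at SF7, 1.8s at SF12, constant 120mA):

```python
# Relative TX energy across spreading factors, from the airtimes above.

TX_MA = 120
AIRTIME_S = {"SF7": 0.052, "SF12": 1.8}

cost_uah = {sf: TX_MA * 1000 * t / 3600 for sf, t in AIRTIME_S.items()}
sf12_vs_sf7 = cost_uah["SF12"] / cost_uah["SF7"]   # ~35x
```

This is why a node drifting up the ADR ladder can quietly multiply its transmit budget without a single line of firmware changing.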
Clock drift. The RTC crystal drifted with temperature, causing the node to wake up slightly early or late. The wakeup routine included a calibration step that added 15ms of active time per cycle. Small, but it adds up over 96 cycles per day.
The real lesson
Battery math is not hard. Battery math that matches reality is very hard. The gap between datasheet numbers and field measurements is where your battery life goes to die.
Measure everything. Trust nothing from a datasheet without verification on your actual board, in your actual environment, at your actual operating temperature. Budget for the receive windows. Budget for temperature derating. Budget for the things you haven't thought of yet, because there will be things you haven't thought of.
And maybe budget for an extra site visit or two. The field has a way of humbling your spreadsheet.