A real-life example of this and how to sustain it
At a Burn event, I actually met someone who does exactly this.
In his case it takes the form of recording a video message from the person themselves, to be sent to their own or some other email address at some point in the future. I recorded my own video to be emailed to someone in 5 years' time, CCed to me so I know when it's due to arrive and can follow up.
The sending is powered by a script he wrote, which runs continuously as a service on AWS and periodically checks whether it's time to send. It's possible to get really cheap or even free compute power and long-term storage on that platform: AWS and other cloud providers offer free or scaled-down tiers, since people often want to test their services or their code in a small environment before deploying it live to a customer audience.
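To make the idea concrete, here's a minimal sketch of what such a checking loop could look like. This is my own illustration, not his actual script; the JSON message store, its field names and the SMTP host are all assumptions:

```python
import json
import smtplib
import time
from datetime import datetime, timezone
from email.message import EmailMessage

# Assumed store: a JSON list of messages, each with "send_at" (ISO-8601 UTC timestamp
# like "2030-01-01T00:00:00+00:00"), "to", "cc", "subject", "body" and a "sent" flag.
MESSAGES_FILE = "messages.json"
CHECK_INTERVAL = 60 * 60  # check once an hour; the exact cadence isn't important

def due_messages(messages, now):
    """Return messages whose send time has passed and that haven't been sent yet."""
    return [m for m in messages
            if not m.get("sent") and datetime.fromisoformat(m["send_at"]) <= now]

def send(message):
    """Send one message by email (sender address and SMTP host are placeholders)."""
    email = EmailMessage()
    email["From"] = "future-messages@example.com"
    email["To"] = message["to"]
    if message.get("cc"):
        email["Cc"] = message["cc"]
    email["Subject"] = message["subject"]
    email.set_content(message["body"])
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(email)

while True:
    with open(MESSAGES_FILE) as f:
        messages = json.load(f)
    for message in due_messages(messages, datetime.now(timezone.utc)):
        send(message)
        message["sent"] = True
    with open(MESSAGES_FILE, "w") as f:
        json.dump(messages, f, indent=2)
    time.sleep(CHECK_INTERVAL)
```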
In this case you would be handling small amounts of data, nowhere near the request volume of larger services, so it would fall into that low-demand bracket for storage and compute and would likely stay inexpensive unless it grew to the scale of something like a Google service. People simply won't leave as many messages for their far-future selves as they would, say, reminders in a calendar app over the next month.
But beyond keeping costs down, using cloud services also makes it easy to scale up and down flexibly as demand dictates.
Large cloud service providers like AWS, Google and Azure also offer extra protection against data loss by replicating your data across multiple servers, so damage to any one piece of storage hardware doesn't take your data with it. They even offer geo-redundant storage, where copies are kept in geographic locations far apart from each other, even on different continents, so that if a natural disaster took down an entire data centre your data would still be safe.
Geo-redundancy is especially relevant here given that the storage needs to survive for years (all the more so with the increasing dangers of extreme weather due to climate change).
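On AWS, for instance, this could be as simple as replicating the bucket that holds the messages into a second region. A hedged sketch using boto3; the bucket names, account ID and IAM role ARN are placeholders, and both buckets need versioning enabled before replication will work:

```python
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on both the source and destination buckets.
for bucket in ("future-messages-eu", "future-messages-us"):  # hypothetical bucket names
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Copy every object written to the primary bucket into a bucket in another region.
s3.put_bucket_replication(
    Bucket="future-messages-eu",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder IAM role
        "Rules": [{
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {"Bucket": "arn:aws:s3:::future-messages-us"},
        }],
    },
)
```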
In terms of the service itself, the person hosting it can be notified if the script stops running, and AWS will also notify them if anything goes wrong on its side, so you would likely not have to check on it manually.
AWS also provides metrics and logging for spotting failed requests, failed sends and other errors.
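For example, the script could report failed sends as a custom CloudWatch metric and have an alarm notify you via SNS whenever a failure shows up. Roughly like this with boto3; the namespace, metric name and SNS topic ARN are made up for illustration:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Inside the sending script: record each failed send as a data point
# on a custom metric (namespace and metric name invented for this sketch).
def report_failed_send():
    cloudwatch.put_metric_data(
        Namespace="FutureMessages",
        MetricData=[{"MetricName": "FailedSends", "Value": 1, "Unit": "Count"}],
    )

# One-off setup: alarm whenever any failure is recorded within an hour,
# notifying a (placeholder) SNS topic that emails you.
cloudwatch.put_metric_alarm(
    AlarmName="future-messages-failed-sends",
    Namespace="FutureMessages",
    MetricName="FailedSends",
    Statistic="Sum",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:alerts"],
)
```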
If you wanted an extra failsafe for the script itself, it'd be easy to write a "heartbeat" script (I've done it before as a means of monitoring a service's SLA uptime) that checks as frequently as you like whether the service is still running and notifies you if it stops responding.
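A heartbeat along those lines might look something like this sketch, assuming the main service exposes a simple health-check URL (the URL, email addresses and SMTP host here are placeholders):

```python
import smtplib
import time
import urllib.request
from email.message import EmailMessage

HEALTH_URL = "https://example.com/health"  # hypothetical health endpoint on the main service
CHECK_EVERY = 5 * 60                       # seconds between heartbeat checks

def service_is_up():
    """Return True if the service answers its health check within 10 seconds."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as response:
            return response.status == 200
    except OSError:
        return False

def notify(text):
    """Alert the operator by email (addresses and SMTP host are placeholders)."""
    alert = EmailMessage()
    alert["From"] = "heartbeat@example.com"
    alert["To"] = "me@example.com"
    alert["Subject"] = "Heartbeat alert"
    alert.set_content(text)
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(alert)

while True:
    if not service_is_up():
        notify("The future-message service stopped responding to its health check.")
    time.sleep(CHECK_EVERY)
```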
In the case of something failing, this early alert also makes a catch-up mechanism easy: when the service comes back after having stopped for some reason, it can consult the time of the last successful send and send everything that fell due in the meantime as a batch.
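Continuing the earlier sketch, the catch-up could be a small function that collects everything that fell due after the last successful send and pushes it out in one go (again just an illustration, reusing the assumed message store and send() helper from above):

```python
from datetime import datetime, timezone

def catch_up(messages, last_successful_send):
    """Send, as one batch, everything that fell due while the service was down.

    `messages` is the same list of message dicts as before, and `last_successful_send`
    is a timezone-aware datetime of the last entry known to have gone out.
    """
    now = datetime.now(timezone.utc)
    missed = [m for m in messages
              if not m.get("sent")
              and last_successful_send < datetime.fromisoformat(m["send_at"]) <= now]
    for message in sorted(missed, key=lambda m: m["send_at"]):
        send(message)            # send() from the periodic-check sketch above
        message["sent"] = True
    return missed
```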
Another way to make sure a send definitely goes through is to trigger or retry it multiple times, but idempotently: the request to send can go through any number of times, yet only one send is allowed to succeed, so the recipient doesn't get spammed with duplicate emails. Retrying like this should cut down on failures caused by transient network errors, so a brief outage would be less likely to be cause for alarm. The heartbeat script itself, or a service triggered by it, could also restart the main service if it stops responding.
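One simple way to sketch that idempotency is to keep a ledger of message IDs that have already gone out and check it before every attempt. This assumes each message carries a unique id; in a real multi-instance setup you'd want the ledger somewhere with atomic writes (a database, or a conditional put in DynamoDB/S3) rather than a local file:

```python
import json

SENT_LEDGER = "sent_ledger.json"  # assumed record of message IDs that have already gone out

def load_ledger():
    try:
        with open(SENT_LEDGER) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

def send_idempotently(message, attempts=3):
    """Retry the send up to `attempts` times, but never deliver the same message twice."""
    ledger = load_ledger()
    if message["id"] in ledger:
        return True               # already delivered by an earlier attempt or another instance
    for _ in range(attempts):
        try:
            send(message)         # send() from the earlier sketch
        except OSError:
            continue              # transient network error: try again
        ledger.add(message["id"])
        with open(SENT_LEDGER, "w") as f:
            json.dump(sorted(ledger), f)
        return True
    return False
```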
You could have geo-redundancy for the heartbeat service as well, and run more than one instance of both it and the main service, still sending idempotently. That way the service is unlikely to go down in the first place, because several instances of it are running, and it is itself watched over by multiple heartbeats as well as by the cloud provider's own monitoring.
Data portability and compatibility for future-proofing
Making the actual data easily exportable to another platform would also help with future-proofing: store it as iCal for calendar compatibility, or as XML/JSON/YAML/CSV for general compatibility. Even if you migrate to another system, the record of all your saved future dates is there in a widely supported format, and even if one of those formats were eventually deprecated, so many platforms support them and so many conversion tools already exist that switching formats would be quick and easy, especially as the latter file types are specifically designed to be universal interchange formats.
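Exporting in those formats only takes a few lines. For instance, here's a sketch that writes the same assumed message list from earlier out as both plain JSON and a minimal, hand-rolled iCal file:

```python
import json
from datetime import datetime, timezone

def export_json(messages, path="future_messages.json"):
    """Dump the message list as plain JSON, readable by practically anything."""
    with open(path, "w") as f:
        json.dump(messages, f, indent=2)

def export_ical(messages, path="future_messages.ics"):
    """Write each future send date as an all-day event in a minimal iCalendar file."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//future-messages//EN"]
    for m in messages:
        date = datetime.fromisoformat(m["send_at"]).strftime("%Y%m%d")
        lines += [
            "BEGIN:VEVENT",
            f"UID:{m['id']}@future-messages",   # assumes each message has a unique id, as earlier
            f"DTSTAMP:{stamp}",
            f"DTSTART;VALUE=DATE:{date}",
            f"SUMMARY:Future message to {m['to']}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    with open(path, "w") as f:
        f.write("\r\n".join(lines) + "\r\n")
```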
With the data portable in an easily accessible, universal format, you wouldn't have to worry about whether your current tools and apps will still be working in the future; you'd be able to upgrade the script to newer technology and hardware as frequently (or as infrequently) as you like!
Please leave feedback on this idea!