Predicting Overnight Temperatures with Lag-Llama: Saving My Fall Plant

Introduction

As the leaves turn golden and the air grows crisp, fall brings a unique charm to New York. But with the beauty of autumn comes the challenge of protecting my new orange mum plant from the freezing temperatures. This fall, I decided to use cutting-edge technology to predict overnight lows and ensure my plant survives the frost.

In this blog, I’ll walk you through how I used Lag-Llama, an open-source foundation model for time series forecasting, to predict temperatures and make informed decisions about when to bring my plant indoors. Whether you’re a gardening enthusiast or a data science geek, this story will show you how AI can solve real-world problems in creative ways.

The Problem: Protecting My Plant from Frost

Fall in New York is beautiful but unpredictable. As temperatures drop, my new orange mum plant, which thrives outdoors, faces the risk of frost damage. If the temperature dips below freezing, the plant could die. To prevent this, I needed a way to predict overnight lows accurately and decide when to bring the plant indoors.

This is where time series forecasting comes in. By analyzing historical temperature data, I could predict future temperatures and take proactive steps to protect my plant.

Gathering the Data: Hourly Temperatures in New York

The first step was to gather data. I collected hourly temperature readings for New York over several weeks, focusing on October and November. This data, sourced from ACS Web Services, provided the foundation for my predictions.

However, the dataset had some missing values. To address this, I used interpolation to fill in the gaps, ensuring a complete time series for accurate forecasting.
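If you want to try this yourself, the gap-filling step is only a few lines of pandas. Here's a rough sketch; the file and column names are placeholders I made up, not the actual dataset I used.

```python
# Minimal sketch of gap-filling for an hourly temperature series.
# "nyc_hourly_temps.csv", "timestamp", and "temperature" are illustrative names.
import pandas as pd

df = pd.read_csv("nyc_hourly_temps.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

# Reindex onto a strict hourly grid so missing hours show up as NaN,
# then fill the gaps with time-weighted linear interpolation.
hourly_index = pd.date_range(df.index.min(), df.index.max(), freq="h")
df = df.reindex(hourly_index)
df["temperature"] = df["temperature"].interpolate(method="time")
```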

Introducing Lag-Llama: A Foundation Model for Time Series Forecasting

Traditional time series models like ARIMA require extensive training on specific datasets. But Lag-Llama, an open-source foundation model, works differently.

Trained on large-scale time series data, Lag-Llama can generate forecasts without additional training, much like how large language models (LLMs) can generate text without task-specific fine-tuning. Built on the Llama architecture, Lag-Llama uses lag features—past data points—to make predictions, combining the power of transformers with traditional forecasting techniques.
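To make the idea of lag features concrete, here's a toy illustration (the numbers are made up): each row pairs a timestamp's temperature with the values recorded one and two hours earlier, and those shifted columns are the "lags" a model can condition on.

```python
import pandas as pd

# A short hourly series with made-up values, just for illustration.
temps = pd.Series(
    [52.0, 50.5, 49.0, 48.2, 47.9, 47.1],
    index=pd.date_range("2024-10-01 18:00", periods=6, freq="h"),
    name="temperature",
)

# Lag features: the readings observed 1 and 2 hours before each timestamp.
lag_features = pd.DataFrame({
    "temperature": temps,
    "lag_1h": temps.shift(1),
    "lag_2h": temps.shift(2),
})
print(lag_features)
```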

Setting Up the Environment: Tools and Libraries

To run the Lag-Llama model, I used IBM Watsonx.ai Studio, but you can use any environment that supports Python. Here’s what I did:

  1. Cloned the Lag-Llama GitHub repository.
  2. Downloaded the pre-trained model weights from Hugging Face (see the sketch below).
  3. Imported the necessary libraries, including GluonTS, a Python library for probabilistic time series forecasting (used here with its PyTorch backend).
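Here's roughly what that setup looks like in code. The repository URL and checkpoint names below reflect my understanding of the public Lag-Llama project; double-check them against the project's README before running anything.

```python
# Setup sketch (repo and checkpoint names are assumptions; verify in the README):
#   git clone https://github.com/time-series-foundation-models/lag-llama.git
#   pip install -r lag-llama/requirements.txt
from huggingface_hub import hf_hub_download

# Pull the pre-trained checkpoint from Hugging Face.
ckpt_path = hf_hub_download(
    repo_id="time-series-foundation-models/Lag-Llama",
    filename="lag-llama.ckpt",
)
print("Checkpoint downloaded to:", ckpt_path)
```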

Preparing the Data: Cleaning and Interpolation

Before making predictions, I cleaned the data by interpolating missing values. This ensured a smooth time series without gaps. The dataset showed a clear trend of decreasing temperatures as fall progressed, setting the stage for accurate forecasting.
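After cleaning, the series still has to be wrapped in a GluonTS dataset before Lag-Llama can consume it. A minimal sketch, assuming the cleaned `df` from the interpolation step above:

```python
# Sketch: wrap the cleaned hourly series in a GluonTS dataset.
# Assumes `df` has an hourly DatetimeIndex and a "temperature" column,
# as in the interpolation sketch earlier.
from gluonts.dataset.pandas import PandasDataset

df["temperature"] = df["temperature"].astype("float32")
dataset = PandasDataset(df, target="temperature", freq="h")
```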

Making Predictions: Zero-Shot Forecasting with Lag-Llama

With the data ready, I configured the model:

  • Prediction Length: 8 hours (overnight temperatures).
  • Context Length: 1 week (168 hours of past data for the model to look back over).

Using the Lag-Llama predictor, I generated forecasts for late November, when the first frost typically occurs. The model produced probabilistic forecasts, showing not just the predicted temperature but also the 50% and 90% prediction intervals, indicating the model’s confidence.
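Below is a sketch of how this zero-shot setup can be wired together. It follows my reading of the Lag-Llama repository's demo notebook and reuses `ckpt_path` and `dataset` from the earlier sketches; the exact constructor arguments may differ between versions, so treat it as a starting point rather than the definitive API.

```python
# Sketch of zero-shot forecasting with Lag-Llama (argument names follow my
# reading of the repo's demo notebook and may change across versions).
import torch
from gluonts.evaluation import make_evaluation_predictions
from lag_llama.gluon.estimator import LagLlamaEstimator

prediction_length = 8     # 8 overnight hours
context_length = 24 * 7   # one week of hourly history

# Rebuild the estimator from the checkpoint's stored hyperparameters.
ckpt = torch.load(ckpt_path, map_location="cpu")
args = ckpt["hyper_parameters"]["model_kwargs"]

estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,
    prediction_length=prediction_length,
    context_length=context_length,
    input_size=args["input_size"],
    n_layer=args["n_layer"],
    n_embd_per_head=args["n_embd_per_head"],
    n_head=args["n_head"],
    scaling=args["scaling"],
    time_feat=args["time_feat"],
)

lightning_module = estimator.create_lightning_module()
transformation = estimator.create_transformation()
predictor = estimator.create_predictor(transformation, lightning_module)

# Probabilistic forecasts: 100 sample paths per forecast window.
forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset, predictor=predictor, num_samples=100
)
forecasts = list(forecast_it)
actuals = list(ts_it)
```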

Evaluating the Forecasts: Accuracy and Insights

To assess the model’s performance, I compared the forecasts to actual temperatures using mean absolute percentage error (MAPE). The results were promising:

  • On November 24th, the model accurately predicted temperatures within the 50% prediction interval.
  • On November 28th, the forecast was less accurate, but the actual temperature still fell within the 90% prediction interval.
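For reference, here's a small sketch of how MAPE can be computed against the observed values, reusing the `forecasts` and `actuals` lists from the forecasting sketch above.

```python
# Sketch: MAPE of the median forecast against the observed temperatures,
# reusing `forecasts` and `actuals` from the forecasting step.
import numpy as np

def mape(y_true, y_pred) -> float:
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

forecast = forecasts[0]                         # one 8-hour forecast window
observed = actuals[0][-len(forecast.mean):]     # matching actual readings
print(f"MAPE: {mape(observed.to_numpy().ravel(), forecast.quantile(0.5)):.1f}%")
```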

By following my guideline—bringing the plant indoors if the 50% interval indicated frost—I successfully protected my plant.
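That guideline boils down to a single quantile check. A sketch, assuming the temperatures are in Fahrenheit (swap the threshold for 0 if you work in Celsius):

```python
# Decision rule sketch: bring the plant in if the lower edge of the 50%
# interval (the 25th percentile) touches freezing anywhere in the window.
# Assumes Fahrenheit readings; adjust the threshold for Celsius.
FROST_THRESHOLD_F = 32.0

lower_50 = forecasts[0].quantile(0.25)   # lower bound of the 50% interval
if (lower_50 <= FROST_THRESHOLD_F).any():
    print("Frost risk inside the 50% interval: bring the mum indoors.")
else:
    print("No frost expected: the mum can stay outside tonight.")
```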

Results: Did My Plant Survive?

Thanks to Lag-Llama, my orange mum survived the fall frosts. The model’s predictions, while not perfect, were reliable enough to guide my decisions. This experiment showed me the potential of foundation models for real-world applications.

Why Foundation Models Are the Future of Time Series Forecasting

While generative AI and LLMs have dominated the spotlight, foundation models like Lag-Llama are paving the way for advancements in time series forecasting. By leveraging large-scale training and transformer architectures, these models offer:

  • Zero-shot forecasting: No need for task-specific training.
  • Probabilistic predictions: Insights into model confidence.
  • Scalability: Applicable to diverse datasets and industries.

Conclusion: AI Meets Everyday Life

This project was a perfect example of how AI can solve everyday problems. By using Lag-Llama to predict overnight temperatures, I saved my plant and gained a deeper appreciation for the power of foundation models.

Whether you’re a gardener, a data scientist, or just someone curious about AI, I hope this story inspires you to explore the possibilities of time series forecasting. After all, the future of AI isn’t just about big breakthroughs; it’s also about small, meaningful applications that make life better.
