# Project Roadmap & Future Tasks
## 🧭 Current Decision Point
We are at a crossroads. The current app is stable, and plans for expansion are ready.
### 🔘 Option A: Polish Current App (Low Priority)

Focus on minor unimplemented features.

* [ ] **Micro-interactions (Priority C)**:
  * Tab switching animations (fade/slide)
  * Dialog entrance animations
  * Badge unlock celebrations
* [ ] **Coach Mark Fixes**: Verify whether the tutorial overlay persists incorrectly, and fix it if so.
* [ ] **Image Compression**: Refactor to use the `image` package for actual compression instead of a simple file copy.

### 🔘 Option B: Synology Infrastructure (High Stability)

Establish the data bunker and security.

* [ ] **Phase 2A: Container Manager Setup**:
  * Set up the `posimai-db` (Postgres) container.
  * Set up the `ai-proxy` (FastAPI) container.
  * Set up the `cloudflared` tunnel for secure remote access.
* [ ] **Phase 2B: Automation**:
  * Implement nightly batch processing (e.g., AI recommendations).

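The three Phase 2A containers could be grouped as a single Container Manager (docker-compose) project. A hedged sketch follows; the images are standard, but every path, variable, and build context here is an assumption, not a verified Synology configuration:

```yaml
# Hypothetical compose project for Phase 2A -- paths and variables
# are placeholders to adapt, not the actual setup.
services:
  posimai-db:
    image: postgres:16
    environment:
      POSTGRES_DB: posimai
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}   # supply via env, not in the file
    volumes:
      - /volume1/docker/posimai/pgdata:/var/lib/postgresql/data
    restart: unless-stopped

  ai-proxy:
    build: ./ai-proxy        # the FastAPI app with the 10/day rate limit
    depends_on:
      - posimai-db
    restart: unless-stopped

  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      TUNNEL_TOKEN: ${TUNNEL_TOKEN}             # keep the tunnel token out of the file
    restart: unless-stopped
```

With `cloudflared` running alongside, no NAS ports need to be exposed directly to the internet.
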
### 🔘 Option C: Incense App Expansion (New Feature)

Build the "Posimai Core" platform.

* [ ] **Core Refactoring**: Extract Gemini, Camera, and Hive logic into `lib/core`.
* [ ] **Flavor Setup**: Configure build flavors for Sake vs. Incense.
* [ ] **Incense App MVP**: Implement `ScentStats` and Zen Mode.

---
## 💡 Architecture FAQ
### Q1. Can Synology handle AI Analysis locally?
**Short Answer: Not recommended for Image/Vision tasks.**
* **Reason**: Standard Synology NAS devices (DS220+, DS923+, etc.) lack powerful GPUs.

* **Performance**: Running a "Vision LLM" (like Llama 3.2 Vision) on a CPU-only NAS would take **30-120 seconds per image**, compared to **1-3 seconds** with Gemini API.
* **Exception**: An AI-focused device (e.g., the Synology DVA series, or a NAS with a PCIe GPU added) makes local inference feasible; otherwise it is not practical for a good user experience.

### Q2. How can high Gemini token usage be avoided?

**Strategy 1: Use the Free Tier (Recommended)**

* **Gemini 1.5 Flash** offers a generous free tier:
  * 15 requests per minute (RPM)
  * 1,500 requests per day (RPD)
* This is sufficient for personal use and small-scale testing.

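To stay under the free-tier RPM cap, the app could space out its own requests client-side. A minimal sketch of a sliding-window throttle (the `RpmThrottle` class and its interface are illustrative, not existing code; the 15 RPM figure comes from the list above):

```python
import time
from collections import deque

class RpmThrottle:
    """Keeps request rate under a requests-per-minute cap using a
    sliding 60-second window of recent request timestamps."""

    def __init__(self, rpm: int = 15):
        self.rpm = rpm
        self.sent = deque()  # timestamps of requests in the last 60 s

    def acquire(self, now=None) -> float:
        """Registers one request and returns how long the caller should
        sleep first (0.0 when a slot is immediately free)."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have left the 60-second window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        wait = 0.0
        if len(self.sent) >= self.rpm:
            # Window is full: wait until the oldest request ages out.
            wait = 60 - (now - self.sent[0])
            now += wait  # model the sleep; a real caller does time.sleep(wait)
        self.sent.append(now)
        return wait
```

Passing `now` explicitly makes the logic testable without real sleeping; in production the caller would `time.sleep()` for the returned duration before sending.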
**Strategy 2: Caching (Architecture)**
* **Implementation**: Store the AI Analysis result in the local DB (Postgres on Synology).

* **Logic**: Before sending an image to Gemini, check whether this exact image hash has been analyzed before. (This only works for byte-identical files.)

* **Note**: For new photos, you cannot avoid the first analysis.
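The check described above can be sketched with a content hash as the cache key. This is an illustration using SQLite for brevity; the table and function names are hypothetical, not the actual Postgres schema on the NAS:

```python
import hashlib
import sqlite3

def make_cache(conn: sqlite3.Connection) -> None:
    # Illustrative cache table keyed by the image's SHA-256 digest.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS ai_cache"
        " (image_sha256 TEXT PRIMARY KEY, result TEXT)"
    )

def analyze_with_cache(conn, image_bytes: bytes, call_gemini) -> str:
    """Returns the cached result when the exact same bytes were seen
    before; otherwise calls the injected Gemini client and stores it."""
    key = hashlib.sha256(image_bytes).hexdigest()
    row = conn.execute(
        "SELECT result FROM ai_cache WHERE image_sha256 = ?", (key,)
    ).fetchone()
    if row:
        return row[0]  # cache hit: no API tokens spent
    result = call_gemini(image_bytes)
    conn.execute("INSERT INTO ai_cache VALUES (?, ?)", (key, result))
    return result
```

Injecting `call_gemini` as a parameter keeps the caching logic independent of any particular API client.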
**Strategy 3: Local Proxy Limits**
* The current `ai-proxy` already implements a rate limit (10 requests per day). This prevents runaway token usage and cost.

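The proxy-side guard amounts to a per-day counter. A minimal sketch of that logic; only the 10/day figure comes from the document, and the real FastAPI middleware may be implemented differently:

```python
import datetime

class DailyLimit:
    """Allows up to max_per_day requests per calendar day,
    resetting the counter when the date changes."""

    def __init__(self, max_per_day: int = 10):
        self.max_per_day = max_per_day
        self.day = None    # date the current counter belongs to
        self.count = 0

    def allow(self, today: datetime.date) -> bool:
        if today != self.day:            # new day: reset the counter
            self.day, self.count = today, 0
        if self.count >= self.max_per_day:
            return False                 # over budget (the proxy would return HTTP 429)
        self.count += 1
        return True
```

In the proxy this check would run before forwarding anything to Gemini, so rejected requests cost zero tokens.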
---
## 🗺️ Long-term Vision
* **Posimai Core**: A single codebase powering multiple collection apps.
* **Hybrid Cloud**: Google for "Brain" (AI), Synology for "Memory" (DB/Backup).