# Welcome to your Expo app 👋

This is an [Expo](https://expo.dev) project created with `create-expo-app`.

## Get started

1. Install dependencies

   ```bash
   npm install
   ```

2. Start the app

   ```bash
   npx expo start
   ```
In the output, you'll find options to open the app in a
- development build
- Android emulator
- iOS simulator
- Expo Go, a limited sandbox for trying out app development with Expo
You can start developing by editing the files inside the **app** directory. This project uses file-based routing.
## Get a fresh project

When you're ready, run:

```bash
npm run reset-project
```

This command will move the starter code to the **app-example** directory and create a blank **app** directory where you can start developing.
## Learn more

To learn more about developing your project with Expo, look at the following resources:
- Expo documentation: Learn fundamentals, or go into advanced topics with our guides.
- Learn Expo tutorial: Follow a step-by-step tutorial where you'll create a project that runs on Android, iOS, and the web.
## Join the community

Join our community of developers creating universal apps.
- Expo on GitHub: View our open source platform and contribute.
- Discord community: Chat with Expo users and ask questions.
## Web scraping script

The project includes a web scraping script that collects scam education articles from various Singapore-based websites. The script is located in `scripts/scrapeEducationContent.js`.
### Environment setup

- Ensure you have the required environment variables in your `.env` file:

```
EXPO_PUBLIC_SUPABASE_URL=your_supabase_project_url
EXPO_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key
```
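As a minimal sketch, the script could validate these variables at startup before attempting any scraping. The `requireEnv` helper below is hypothetical, not part of the actual script; in the real script the variables would be loaded first via `dotenv`.

```javascript
// Hypothetical helper: verify required environment variables before scraping.
// In the real script these would first be loaded with require('dotenv').config().
function requireEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return names.map((name) => env[name]);
}

// Example with stand-in values instead of a real .env file:
const [url, anonKey, serviceKey] = requireEnv(
  ['EXPO_PUBLIC_SUPABASE_URL', 'EXPO_PUBLIC_SUPABASE_ANON_KEY', 'SUPABASE_SERVICE_ROLE_KEY'],
  {
    EXPO_PUBLIC_SUPABASE_URL: 'https://example.supabase.co',
    EXPO_PUBLIC_SUPABASE_ANON_KEY: 'anon-key',
    SUPABASE_SERVICE_ROLE_KEY: 'service-key',
  }
);
console.log(url); // https://example.supabase.co
```

Failing fast here gives a clearer error message than a Supabase client failing mid-run with partially inserted data.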
- The script requires the following dependencies (already included in `package.json`):
  - `axios`: for making HTTP requests
  - `cheerio`: for parsing HTML content
  - `dotenv`: for loading environment variables
### Database schema

The scraped articles are stored in the `educational_articles` table with the following structure:

- `id`: UUID (primary key)
- `title`: Text (article title)
- `content`: Text (article content)
- `image_url`: Text (URL to the article image or a placeholder)
- `source_url`: Text (original article URL)
- `created_at`: Timestamp
- `updated_at`: Timestamp
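A row destined for this table might be built as sketched below. The helper and the placeholder URL are illustrative assumptions, not part of the script; `id`, `created_at`, and `updated_at` are assumed to be filled in by database defaults.

```javascript
// Illustrative helper (not in the actual script): build a row matching the
// educational_articles schema. The placeholder URL is an assumed example.
const PLACEHOLDER_IMAGE = 'https://placehold.co/600x400';

function buildArticleRow({ title, content, imageUrl, sourceUrl }) {
  return {
    title,
    content,
    image_url: imageUrl || PLACEHOLDER_IMAGE, // fall back to a placeholder
    source_url: sourceUrl,
  };
}

const row = buildArticleRow({
  title: 'How to spot phishing SMSes',
  content: 'Article body text',
  sourceUrl: 'https://example.sg/articles/phishing',
});
console.log(row.image_url); // placeholder, since no imageUrl was supplied
```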
### Running the script

To run the web scraping script:

```bash
npm run scrape
```

The script will:
- Scrape articles from configured sources
- Check for duplicates before insertion
- Insert new articles into the Supabase database
- Log the number of articles inserted and skipped
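One way the duplicate check could work is sketched below, assuming `source_url` identifies an article; the real script's logic may differ.

```javascript
// Sketch of the duplicate check: treat source_url as an article's identity
// and skip anything already present in the database.
function filterNewArticles(scraped, existingUrls) {
  const seen = new Set(existingUrls);
  const inserted = [];
  let skipped = 0;
  for (const article of scraped) {
    if (seen.has(article.source_url)) {
      skipped += 1; // duplicate: already stored or seen earlier this run
    } else {
      seen.add(article.source_url); // also dedupes within a single run
      inserted.push(article);
    }
  }
  return { inserted, skipped };
}

const result = filterNewArticles(
  [{ source_url: 'a' }, { source_url: 'b' }, { source_url: 'a' }],
  ['b'] // URL already in the database
);
console.log(`Inserted: ${result.inserted.length}, skipped: ${result.skipped}`); // Inserted: 1, skipped: 2
```

Fetching the existing URLs once and checking against a `Set` avoids issuing one database query per scraped article.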
### Adding new sources

To add a new source:

1. Create a new scraper function following the pattern of `scrapeScamShield()`
2. Add the new scraper to the `main()` function
3. Ensure proper error handling and logging
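The registration step might look like the sketch below. The scraper bodies are stubs standing in for the real axios/cheerio logic, and `scrapeNewSource` is a hypothetical name for the scraper being added.

```javascript
// Stub standing in for the existing scraper's axios/cheerio logic.
async function scrapeScamShield() {
  return [{ title: 'Stub article', source_url: 'https://example.sg/1' }];
}

// Hypothetical new scraper added alongside the existing one.
async function scrapeNewSource() {
  return [{ title: 'Another stub', source_url: 'https://example.sg/2' }];
}

const SCRAPERS = [scrapeScamShield, scrapeNewSource];

async function main() {
  const articles = [];
  for (const scraper of SCRAPERS) {
    try {
      const result = await scraper();
      console.log(`${scraper.name}: found ${result.length} article(s)`);
      articles.push(...result);
    } catch (err) {
      // One failing source should not abort the whole run.
      console.error(`${scraper.name} failed: ${err.message}`);
    }
  }
  return articles;
}
```

Wrapping each scraper in its own `try/catch` keeps a broken source from blocking inserts from the sources that still work.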
### Security notes

- The script uses appropriate request headers to avoid being blocked
- Row Level Security (RLS) is enabled on the database table
- Only the service role can insert new articles
- All users can read articles
### Error handling

The script handles the following error cases:
- HTTP request errors
- HTML parsing errors
- Database operation errors
- Duplicate article detection
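As a rough sketch, these cases could be told apart as below. The classifier is illustrative, not taken from the script: the property checks follow common conventions (axios attaches `response`/`request` to HTTP errors; Postgres reports a unique-constraint violation as code `23505`), and `ParseError` is a hypothetical name.

```javascript
// Illustrative classifier for the error cases listed above. The exact error
// shapes in the real script may differ.
function classifyError(err) {
  if (err.response || err.request) return 'http'; // axios-style HTTP failure
  if (err.name === 'ParseError') return 'parse';  // hypothetical HTML parse error
  if (err.code === '23505') return 'duplicate';   // Postgres unique_violation
  if (err.code) return 'database';                // other database error codes
  return 'unknown';
}

console.log(classifyError({ response: { status: 503 } })); // http
console.log(classifyError({ code: '23505' }));             // duplicate
```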
### Maintenance

Regular maintenance tasks:
- Update User-Agent strings if needed
- Monitor for changes in website structures
- Add new sources as they become available
- Review and update scraping patterns if site layouts change