/*
Exercise: Web Crawler

In this exercise you'll use Go's concurrency features to parallelize
a web crawler.

Modify the Crawl function to fetch URLs in parallel without fetching
the same URL twice.

Hint: you can keep a cache of the URLs that have been fetched on a
map, but maps alone are not safe for concurrent use!
*/
package main

import (
	"fmt"
)

type Fetcher interface {
	// Fetch returns the body of URL and
	// a slice of URLs found on that page.