I know how to use ScrapeBox to get the metadata from a list of websites. I know how to use ParseHub to pull specific details from similar pages. I can use spinning software to retrieve text from sites and save it into separate files. But I can't find a tool that crawls a list of random websites and returns just the "text" from each page (e.g. every chunk of text longer than 50 characters before running into HTML code) in txt/csv format. Does anyone know of one?
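In case it helps while you look for a ready-made tool: the filtering step described above ("chunks of text longer than 50 characters between HTML tags") is fairly easy to script yourself. Below is a minimal sketch using only Python's standard library; the class name `TextChunkExtractor`, the 50-character threshold, and the output filename are my own choices, and you'd still need to fetch each page's HTML yourself (e.g. with `urllib.request`).

```python
import csv
from html.parser import HTMLParser


class TextChunkExtractor(HTMLParser):
    """Collect runs of visible text between tags, skipping script/style content."""

    SKIP = {"script", "style"}

    def __init__(self, min_len=50):
        super().__init__()
        self.min_len = min_len
        self.chunks = []
        self._skip_depth = 0  # >0 while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        text = " ".join(data.split())  # collapse runs of whitespace
        if not self._skip_depth and len(text) > self.min_len:
            self.chunks.append(text)


def extract_chunks(html, min_len=50):
    """Return all text chunks longer than min_len characters."""
    parser = TextChunkExtractor(min_len)
    parser.feed(html)
    return parser.chunks


if __name__ == "__main__":
    sample = (
        "<html><body>"
        "<p>This paragraph easily has more than fifty characters "
        "of plain readable text in it.</p>"
        "<p>short</p>"
        "<script>var x = 'not real text, should be skipped';</script>"
        "</body></html>"
    )
    # Write one chunk per row to a CSV file (filename is arbitrary):
    with open("chunks.csv", "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows([c] for c in extract_chunks(sample))
```

Running this over a list of URLs would just mean fetching each page and calling `extract_chunks` on the response body, appending the results to one CSV (or one txt file per site, as you do with the spinning software).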