AuraMarketing
Elite Member
Is there any tool (software or website) available that I can use to get the content (only the text) of a website?
I don't want to use WinHTTrack and download the complete website.
The tool should parse the article from each URL I give it and store the text in one or more txt/doc files.
Additional feature: ideally it could also fetch the URLs linked within the article and grab their content too (same domain only).
I tried using Expired Article Hunter on my list of domains (they are not expired), but it is not generating any results. I think that's because it relies on web.archive.org links.
Something similar that I found >> https://lateral.io/docs/article-extractor
But it only takes one URL at a time, and I have to copy the text manually. Also, it does not parse data from tables.
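To make it concrete, this is roughly the workflow I'm trying to automate. It's only a rough Python sketch, assuming the trafilatura library and a urls.txt file with one URL per line (both are my own assumptions, not an existing tool I've found):

# Rough sketch of what I want automated (assumes Python + trafilatura).
# urls.txt holds one URL per line; each article is saved to its own .txt file.
import trafilatura

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for i, url in enumerate(urls, 1):
    downloaded = trafilatura.fetch_url(url)  # download the raw HTML
    if downloaded is None:
        continue  # skip URLs that fail to download
    # extract the main article text, keeping table data as well
    text = trafilatura.extract(downloaded, include_tables=True)
    if text:
        with open(f"article_{i}.txt", "w", encoding="utf-8") as out:
            out.write(text)

So basically: feed in a list of URLs, get back clean text files, with table content included. If a ready-made tool already does this, that would save me writing it myself.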