Megalo Web Services - Screaming Frog SEO Spider
Megalo Web Services is proud to bring you the powerful Screaming Frog SEO Spider.
The license is valid until October 2024.
Doniaweb
96C4989447-1726998484-CB5127711D
We’re delighted to announce Screaming Frog SEO Spider version 19.2, codenamed internally as ‘Peel’.
This update contains a number of significant updates, new features and enhancements based upon user feedback and a little internal steer.
Let’s take a look at what’s new.
1) Updated Design
While subtle, the GUI appearance has been refreshed in look and feel, with crawl behaviour functions (crawl a subdomain, subfolder, all subdomains etc) moved to the main nav for ease.
These options had previously been within the configuration, so this makes them accessible to free users as well.
There are now alternate row colours in the main tables, updated icons, and even the Twitter icon and link have been removed (!). While the UX, tabs and filters are much the same, the configuration has received an overhaul.
2) Unified Config
The configuration has been unified into a single dialog, with links to each section. This makes adjusting the config more efficient than opening and closing each separately. The naming and location of config items should be familiar to existing users, while being easier to navigate for new users.
There have been a few small adjustments, such as saving and loading configuration profiles now appearing under ‘Configuration’, rather than the ‘File’ menu.
System settings such as user interface, language, storage mode and more are available under ‘File > Settings’, in their own unified configuration.
You can also ‘Cancel’ any changes made by using the cancel button on the configuration dialog.
3) Segments
You can now segment a crawl to better identify and monitor issues and opportunities from different templates, page types, or areas of priority.
The segmentation config can be accessed via the config menu or right-hand ‘Segments’ tab, and it allows you to segment based upon any data found in the crawl, including data from APIs such as GA or GSC, or post-crawl analysis.
You can set up a segment at the start, during, or at the end of a crawl. There’s a ‘segments’ column with coloured labels in each tab.
When segments are set up, the right hand ‘Issues’ tab includes a segments bar, so you can quickly see where on the site the issues are at a glance.
You can then use the right-hand segments filter, to drill down to individual segments.
There’s a new right-hand ‘Segments’ tab with an aggregated view, to quickly see where issues are by segment.
You can use the Segments tab ‘view’ filter to better analyse items like crawl depth by segment.
Or which segments have different types of issues.
Once set-up, segments can be saved with the configuration. Segments are fully integrated into various other features in the SEO Spider as well.
In crawl visualisations, you can now choose to colour by segment.
You can also choose to create XML Sitemaps by segment, and the SEO Spider will automatically create a Sitemap Index file referencing each segmented sitemap.
Within the Export for Looker Studio for automated crawl reports, a separate sheet will also be automatically created for each segment created. This means you can monitor issues by segment in a Looker Studio Crawl Report as well.
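Conceptually, segmentation assigns each crawled URL a label based on matching rules. The sketch below is a hypothetical illustration of that idea in Python; the rule names and URL patterns are invented for the example and are not the SEO Spider's actual configuration format.

```python
# Hypothetical sketch: assigning crawled URLs to segments via pattern rules,
# mirroring the template/page-type segmentation described above.
# Rule labels and regexes are illustrative only.
import re

SEGMENT_RULES = [
    ("Blog", re.compile(r"^/blog/")),
    ("Products", re.compile(r"^/products/")),
    ("Category", re.compile(r"^/category/")),
]

def segment_for(path: str) -> str:
    """Return the first matching segment label for a URL path."""
    for label, pattern in SEGMENT_RULES:
        if pattern.search(path):
            return label
    return "Other"

crawled = ["/blog/new-release", "/products/widget-1", "/about"]
print({path: segment_for(path) for path in crawled})
```

First-match-wins ordering matters here, just as rule order typically matters in segmentation configs: a URL that could match two rules is labelled by the earlier one.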
4) Visual Custom Extraction
Custom Extraction is a super powerful feature in the SEO Spider, but it’s also quite an advanced feature and many users couldn’t care less about learning XPath or CSSPath (understandably so).
To help with this, you’re now able to open a web page in our inbuilt browser and select the elements you wish to extract from either the web page, raw HTML or rendered HTML. We’ll then formulate the correct XPath/CSSPath for you, and provide a range of other options as well.
Just click the web page icon to the side of an extractor to bring up the browser –
Input the URL you wish to scrape, and then select the element on the page. The SEO Spider will then highlight the area you wish to extract, and create an expression for you, with a preview of what will be extracted based upon the raw or rendered HTML.
You can switch to Rendered or Source HTML view and pick a line of HTML as well. For example, if you wish to extract the ‘content’ of an OG tag –
You can then select the attribute you wish to extract from the dropdown, and it will formulate the expression for you.
In this case below, it will scrape the published time, which is shown in the source and rendered HTML previews after selecting the ‘content’ attribute.
For those of you that have mastered XPath, CSSPath and regex, you can continue to input your expressions in the same way as before.
At the moment this new feature doesn’t help with extracting JS, but we plan on extending this functionality to help scrape virtually anything from the HTML.
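For the OG-tag example above, the expression the picker formulates selects the `content` attribute of a matching `meta` tag. Here is a minimal sketch of applying that kind of XPath yourself with Python's standard library (which supports a limited XPath subset, so the attribute is read from the matched element); the sample HTML is illustrative.

```python
# Sketch: the kind of XPath the visual picker formulates for an OG-style tag.
# Full XPath equivalent: //meta[@property='article:published_time']/@content
import xml.etree.ElementTree as ET

page = """<html><head>
  <meta property="article:published_time" content="2023-09-26T10:00:00+00:00"/>
</head><body/></html>"""

root = ET.fromstring(page)
# ElementTree supports attribute predicates, but not attribute selection,
# so we match the element and then read its 'content' attribute.
node = root.find(".//meta[@property='article:published_time']")
print(node.get("content"))
```

A full XPath engine (or the SEO Spider itself) can return the attribute directly with the `/@content` step shown in the comment.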
5) 3D Visualisations
If you’re a fan of our crawl visualisations, then you’ll dig the introduction of a 3D Force-Directed Crawl Diagram, and 3D Force-Directed Directory-Tree Diagram.
They work in the same way as existing crawl and directory-tree visualisations, except (yes, you guessed it) they are 3D and allow you to move around nodes like you’re in space, and ‘within’ the visualisation itself.
These visualisations are restricted to 100k URLs currently, otherwise your machine may explode. At 100k URLs, the visualisation is also slightly bonkers.
Do they help you identify more issues, more effectively? Not really.
But they are fun, and sometimes that’s enough.
6) New Filters & Issues
There are a variety of new filters and issues available across existing tabs that help better filter data, or communicate issues discovered.
The new filters available include the following –
In some cases new data is collected and reported alongside the new filter. For example, we now collect image element dimension attributes, display dimensions and their real dimensions to better identify oversized images, which can be seen in the image details tab and various bulk exports.
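The oversized-image check boils down to comparing an image's real (intrinsic) dimensions against the dimensions at which it is displayed. The following is a hypothetical sketch of that comparison; the ratio threshold is an illustrative assumption, not the SEO Spider's actual heuristic.

```python
# Hypothetical sketch of the oversized-image check described above:
# flag images whose real pixel area greatly exceeds their displayed area.
# The factor of 2.0 is an illustrative threshold, not the tool's own.
def is_oversized(real_w: int, real_h: int,
                 display_w: int, display_h: int,
                 factor: float = 2.0) -> bool:
    """Return True if the real area is at least `factor` x the displayed area."""
    if not display_w or not display_h:
        return False  # display size unknown; cannot judge
    return (real_w * real_h) / (display_w * display_h) >= factor

print(is_oversized(2000, 1500, 400, 300))  # large source shown small -> True
print(is_oversized(800, 600, 800, 600))    # served at native size -> False
```

In practice the displayed size would come from the element's width/height attributes or computed CSS, and the real size from the image file itself.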