You can replace the page contents with the same contents minus the JS,
stripping it out with regexes or selectors
not sure if there is selectors support in Splash
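(For what it's worth: Splash has no server-side CSS selector API, but you can run selectors in the page via splash:evaljs. A minimal sketch — Lua embedded in a Python string for the /execute endpoint; the `.content` selector is just a placeholder:)

```python
# Lua script for Splash's /execute endpoint: run a CSS selector in-page
# via splash:evaljs and return the matched HTML. '.content' is only an
# example selector -- replace it with whatever you need.
LUA_SELECT = """
function main(splash, args)
  assert(splash:go(args.url))
  assert(splash:wait(0.5))
  local html = splash:evaljs([[
    Array.from(document.querySelectorAll('.content'))
         .map(function(el) { return el.outerHTML; })
         .join('\\n')
  ]])
  return {html = html}
end
"""
```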
thank you. will keep working on it.
no problem, good luck
thank you. it’s much appreciated. I’ll post an update with the solution.
Hi all, I’ve been working on a drop-in replacement for Splash that utilises Google Chrome, in particular the DevTools API, to implement the Splash HTTP API. It can be easily deployed as a docker container and the repo is hosted at https://github.com/chuckus/chromewhip. It’s still in early alpha, but the container has working functionality. My motivation was simply to start getting some practice with asyncio. Any suggestions, comments or improvements, please file an issue or pull request :)
@chuckus sounds very cool. Will take a look.
Hello! What are the system requirements for running the Splash container in production? I'm running into a memory leak with the 3.0 docker container; the OOM killer takes it down at 6581516kB after 120k+ processed requests.
Any advice for reducing memory consumption?
I've requested splash:html only, without processing any images.
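(Not an official sizing guide, but the Splash FAQ's suggested production setup is to let the process restart itself when memory grows, rather than fight the leak. A sketch, assuming a recent Splash image — `--maxrss` is in MB, and Docker restarts the container after Splash exits:)

```shell
# Cap Splash at ~3 GB RSS so it exits cleanly before the OOM killer
# steps in, and let Docker restart it; --memory is a hard backstop.
docker run -d --restart=always --memory=4g \
    -p 8050:8050 \
    scrapinghub/splash --maxrss 3000
```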
Hi, I was wondering how to use a proxy with scrapy-splash while also using render.html as the endpoint; Splash is running in the docker container
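(In case it helps: render.html accepts a `proxy` argument, and scrapy-splash forwards whatever you put in `args`. A sketch with plain data — the proxy URL is a placeholder; in a spider you'd pass these via `SplashRequest(url, endpoint='render.html', args=splash_args)`:)

```python
# Arguments for a scrapy-splash request against the render.html endpoint,
# expressed as plain data so the shape is visible.
splash_args = {
    "wait": 1.0,
    # Splash's own 'proxy' argument: the rendered page's requests go out
    # through this. Placeholder credentials/host -- use your own proxy.
    "proxy": "http://user:pass@proxyhost:8118",
}
endpoint = "render.html"
```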
Dear all, how can I capture the traffic Splash sends out to the internet, especially when the requests go out through a proxy?
Somebody there?
The difference between them
@habout632 I'm a few days late to reply. Can you give a bit more context? What differences would you like to know? Have you read the docs?
Hi guys, is Scrapinghub hiring remote software engineers? :D
Hey there, has anyone ever had an issue with Splash Docker container crashing with a 139 error code?
I'm using it with the scrapy-splash plugin and it fails randomly while scraping
I tried a higher verbosity with -v2 and this is always the last line: splash_1 | 2017-10-17 16:06:27.016674 [render]  [lua_runner] send (lua) (b'return', <Lua table at 0x32e1750>)
hi, can anyone help me with a problem? It's quite urgent actually
Hey there, how can I make sure that an image fully loads (not just the img tag appearing in the DOM)?
@yadavankit have you tried using a lua script to wait?
Something like this
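(The snippet itself didn't survive the export; a minimal sketch of that idea — Lua in a Python string for /execute, polling until the image reports complete. The `#hero img` selector is just a placeholder:)

```python
# Lua script for /execute: wait until an <img> has actually finished
# loading (img.complete with a non-zero naturalWidth), not just until
# the tag exists in the DOM. '#hero img' is a placeholder selector.
LUA_WAIT_FOR_IMAGE = """
function main(splash, args)
  assert(splash:go(args.url))
  for _ = 1, 20 do  -- poll up to ~10s
    local done = splash:evaljs([[
      var img = document.querySelector('#hero img');
      img !== null && img.complete && img.naturalWidth > 0
    ]])
    if done then break end
    splash:wait(0.5)
  end
  return splash:png()
end
"""
```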
Hi, does anyone know how to fully disable caching? I'm interested in full HARs every time I request a website.
Greetings all. I'm having an issue with the Splash docker container. I'm trying to attach a custom module through Docker's volume mounting option, and the documentation says you should run it with the argument "--lua-sandbox-allowed-modules", but Docker is saying that isn't a valid argument. Any ideas? I've disabled sandbox mode for now using the "--disable-lua-sandbox" argument, but I'd like to get the other way working.
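(Likely culprit: flag placement. Arguments placed before the image name are parsed by Docker itself, hence "not a valid argument"; anything after the image name is passed to Splash. A sketch — the mount path is the one the Splash docs use for custom Lua modules, and the paths/module name are placeholders:)

```shell
# Flags after 'scrapinghub/splash' go to Splash, not to Docker.
# Mount your modules where the image expects them, then allow them.
docker run -p 8050:8050 \
    -v "$(pwd)/lua_modules":/etc/splash/lua_modules \
    scrapinghub/splash \
    --lua-sandbox-allowed-modules mymodule
```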
Hi, is it possible to upgrade the Python version in the scrapinghub/splash docker image from 3.5.2 to 3.6.5?
I've got a weird problem: whenever I try to scrape a single page, Splash returns all the items, but with multiple pages it returns just a few!
Scraping a single page returns 340 items, while with multiple requests it returns 20. I have a lua script that scrolls the page 2 times and then clicks a button until the button hides. The page loads more items on each button click.
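(Hard to say without seeing the script, but for reference, a sketch of the scroll-then-click-until-gone pattern as Lua in a Python string — `#load-more` is a placeholder selector, and the loop checks that the button still exists rather than CSS visibility:)

```python
# Lua sketch for /execute: scroll twice, then click a "load more" button
# until it disappears, waiting after each click so new items can load.
LUA_SCROLL_CLICK = """
function main(splash, args)
  assert(splash:go(args.url))
  for _ = 1, 2 do
    splash:evaljs("window.scrollTo(0, document.body.scrollHeight)")
    splash:wait(1)
  end
  -- click until the button is gone (capped so we can't loop forever)
  for _ = 1, 50 do
    local present = splash:evaljs(
      "document.querySelector('#load-more') !== null")
    if not present then break end
    splash:evaljs("document.querySelector('#load-more').click()")
    splash:wait(1.5)
  end
  return splash:html()
end
"""
```

If single requests work but concurrent ones return fewer items, the loop may simply be getting cut off under load; raising the request timeout and lowering concurrency is worth a try.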
Hello, I am having an issue with Splash. When running the "docker run -p 5000:5000 -p 5023:5023 scrapinghub/splash" command, my PowerShell (Windows 10) hangs at "Starting factory <twisted.web.server.Site object at 0x7efc279d97f0>", and localhost:8050 shows nothing. In addition, when I run the example script from the docs it returns: "ConnectionError: HTTPConnectionPool(host='localhost', port=8050): Max retries exceeded with url: /run (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000209C41382B0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))". I cannot locate port 8050 with a netstat search. Any ideas on how to fix this?
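(For anyone hitting this later: the command above never publishes port 8050. Splash's HTTP API listens on 8050 by default, and 5023 is the telnet console, so the mapping has to include 8050:)

```shell
# Publish Splash's HTTP port (8050); 5023 is optional (telnet console).
docker run -p 8050:8050 -p 5023:5023 scrapinghub/splash
```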
Hello!! I'm using Splash and Scrapy to crawl multiple pages from a single spider. On my desktop computer (Ubuntu) it works just fine! But when I clone the repo in a virtual machine, also Ubuntu, all parsed items return None. Any feedback would be great! Thanks! (Apparently it has to be that Splash can't render the pages in time... I'm also using AUTOTHROTTLE)
Increased the 'wait' arg in SplashRequest to 5 secs, and it seems to work... Can some Splash expert explain the relationship between the 'wait' argument and Scrapy's AUTOTHROTTLE? Thanks in advance!!
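(Partial answer, as I understand it: the two are independent knobs. 'wait' is how long Splash idles inside one render, while AUTOTHROTTLE adapts Scrapy's delay between requests based on observed latency, which includes the render time, so a longer 'wait' also slows the crawl down. A sketch of where 'wait' goes:)

```python
# 'wait' is a Splash argument: it lengthens each render, which raises
# the per-request latency that AUTOTHROTTLE measures, so the throttle
# backs off further on top of it.
splash_args = {"wait": 5.0}  # seconds Splash idles after page load
# In a spider: SplashRequest(url, args=splash_args)
```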
If anyone is out there, how do you/can you handle a web page sending an attachment response?
Hello everyone, does anyone have a working code sample where all cookies are retained by Splash?
We've tried 1. a lua script and 2. Scrapy code. It's getting really hard for us, and we are on a tight deadline. Please help.
Just like a regular browser, sites replace old cookies with new ones and delete the ones that need to be changed. Is this even possible with Splash?
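(It should be — the scrapy-splash README shows this pattern: seed Splash's cookie jar from the incoming request, render, and hand the updated cookies back so the client carries them into the next request. A trimmed sketch, Lua in a Python string:)

```python
# Lua sketch for /execute: initialize Splash's cookie jar from the
# request, render the page, and return whatever cookies the site set,
# replaced, or deleted, so the caller can reuse them next time.
LUA_COOKIES = """
function main(splash, args)
  splash:init_cookies(splash.args.cookies)
  assert(splash:go(args.url))
  assert(splash:wait(0.5))
  return {
    html = splash:html(),
    cookies = splash:get_cookies(),
  }
end
"""
```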
Hey, can somebody please help with an issue? I am trying to scroll pages using a lua script, but the scroll doesn't get updated; please refer to this issue: scrapinghub/splash#898. Any help would be greatly appreciated. Regards
Hi, I have a problem looping SplashRequest using scrapy_splash in PyCharm. It runs fine for one url, but it doesn't work if I put the SplashRequest in a for loop. I'd appreciate any help. Regards.
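(Hard to diagnose without the code, but the usual culprits are building requests in a loop without yielding each one, or Scrapy's dupefilter dropping repeated URLs. A sketch of the pattern with plain data — in a real spider each iteration would `yield SplashRequest(url, args={"wait": wait})`:)

```python
# The callback must be a generator that yields one request per URL;
# returning only the last request, or constructing requests without
# yielding them, silently drops the rest of the loop.
def build_requests(urls, wait=1.0):
    for url in urls:
        # stand-in for: yield SplashRequest(url, args={"wait": wait})
        yield {"url": url, "endpoint": "render.html", "args": {"wait": wait}}

reqs = list(build_requests(["http://example.com/1", "http://example.com/2"]))
```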