Web scraping: "TypeError: 'NoneType' object is not subscriptable". How do I fix this issue?

September 23, 2021, at 11:10 PM

I have tried to fix this error but I can't.

I am trying to scrape a website, and here is my code. The variable str1 contains the web address of the website as a string.

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

my_url = str1                    # str1 holds the page URL
uClient = uReq(my_url)           # open the connection
page_html = uClient.read()       # read the raw HTML
uClient.close()
page_soup = soup(page_html, "html.parser")
data = page_soup.findAll("div", {"class": "_2kHMtA"})

Finally, I try to print this:

y = data[2].div.img['alt']
print(y)

When I run this, it always throws that error. Please help.

Answer 1

Print your data object. You haven't found the data you're looking for: somewhere along that lookup chain you get back None, and subscripting None is exactly the failure you're seeing.
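Something like this (reusing page_soup and the class name from your snippet, and assuming that class still matches the live page) will show you whether the divs were matched at all and whether each one actually contains an <img>:

data = page_soup.findAll("div", {"class": "_2kHMtA"})
print(len(data))                   # 0 means the selector matched nothing
for i, card in enumerate(data):
    img = card.find("img")         # None if this card has no <img>
    print(i, img["alt"] if img is not None and img.has_attr("alt") else "no alt here")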

Without knowing your input html I can't say why that is.

As an aside: the dict you're passing is accepted (findAll takes it as the attrs argument), but the more common style is keyword arguments. Since class is a reserved word in Python, BeautifulSoup spells the keyword class_, like so:

soup.find_all("div", class_="_myclass")
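For what it's worth, in the bs4 versions I've used both spellings return the same thing. A quick self-contained check (toy HTML and class name made up for illustration):

from bs4 import BeautifulSoup

s = BeautifulSoup('<div class="_myclass">a</div><div class="other">b</div>', "html.parser")
by_kwarg = s.find_all("div", class_="_myclass")
by_dict = s.find_all("div", {"class": "_myclass"})   # the dict is taken as the attrs argument
print(by_kwarg == by_dict)   # True: both match the same single div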

Note that findAll is an alias for find_all.

Note also that you can only run find_all on a soup object or on an individual Tag. You can't run it on the result of a previous find_all, which is a ResultSet object (basically a list of Tag objects). You can, of course, iterate over those Tags and call find or find_all on each one if you need to.
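To make that concrete, here is a small sketch (the HTML is made up for illustration):

from bs4 import BeautifulSoup

html = '<div class="card"><img alt="first"></div><div class="card"><img alt="second"></div>'
s = BeautifulSoup(html, "html.parser")

cards = s.find_all("div", class_="card")   # ResultSet: behaves like a list
for card in cards:                         # each element is a Tag
    img = card.find("img")                 # find works on a Tag
    if img is not None:
        print(img["alt"])
# cards.find_all("img") would raise an AttributeError, because the ResultSet
# itself is not a Tag - which is the point made above.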
