multithreading - Python multithread per block -


I'm executing the following Python code. When I launch many threads, the remote API (the Google Prediction API) returns:

    <HttpError 403 when requesting https://www.googleapis.com/prediction/v1.6/projects/project/trainedmodels/return_reason?alt=json returned "User Rate Limit Exceeded">

I have around 20k objects that need to be processed by the API, and they are all launched at once. It works fine with a small number of objects, so how can I slow the requests down or send them in blocks?

    from threading import Thread, Semaphore

    collection_ = []
    lock_object = Semaphore(value=1)

    def connect_to_api(document):
        try:
            api_label = predictor.make_prediction(document)
            return_instance = ReturnReason(document=document)  # create return reason object
            lock_object.acquire()                               # lock before touching the shared list
            collection_.append(return_instance)
        except Exception as e:
            print(e)
        finally:
            lock_object.release()

    def factory():
        """Start one thread per document and wait for them all to finish."""
        list_of_docs = file_reader.get_file_documents(file_contents)
        threads = [Thread(target=connect_to_api, args=(doc,)) for doc in list_of_docs]
        [t.start() for t in threads]
        [t.join() for t in threads]

Rate-limiting the processing is the whole problem here, and sleeping is not enough for long-running tasks.
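For completeness, sending the requests in fixed-size blocks might look like the sketch below. It reuses connect_to_api from the question; block_size and delay are hypothetical values you would have to tune against the API quota, and this kind of pause-based throttling is exactly what tends to fall short once the job runs for a long time.

    import time
    from threading import Thread

    def process_in_blocks(docs, block_size=50, delay=1.0):
        """Run the API calls block by block instead of all 20k at once."""
        for start in range(0, len(docs), block_size):
            block = docs[start:start + block_size]
            threads = [Thread(target=connect_to_api, args=(doc,)) for doc in block]
            for t in threads:
                t.start()
            for t in threads:
                t.join()          # wait for the whole block before continuing
            time.sleep(delay)     # pause so the per-minute quota can recover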

I suggest using queues (RQ is pretty simple), and the following article is helpful: http://flask.pocoo.org/snippets/70/
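
A minimal sketch of the queue-based idea with RQ, assuming a local Redis server and reusing the names from the question; classify_document and the tasks.py / enqueue.py module names are made up for the example:

    # tasks.py -- the function an RQ worker will run for each document
    # (predictor is the existing prediction client from the question)
    def classify_document(document):
        return predictor.make_prediction(document)

    # enqueue.py -- put one job per document on the queue
    from redis import Redis
    from rq import Queue
    from tasks import classify_document

    q = Queue(connection=Redis())             # defaults to localhost:6379
    for doc in file_reader.get_file_documents(file_contents):
        q.enqueue(classify_document, doc)     # workers pull jobs at their own pace

Start a worker with the rq worker command; a single worker processes jobs one at a time, which keeps the request rate under the quota, and you can add more workers later if the quota allows.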

