How to POST a request with a long string argument to my backend server via GraphQL

I want to post a request from my website to my back end (Django) via GraphQL. Here is my GraphQL request. Probably because the argument is too long, the request never reaches my server, yet GraphQL throws no error and still shows status "200". How can I solve this problem?

{
  sendMail(mailList: "abv@gmail.com abv@gmail.com abv@gmail.com abv@gmail.com ........... (1000 mail here )", content: "testing") {
    data {
      counter
    }
    ok
  }
}
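
One common workaround, sketched below under two assumptions (that sendMail is exposed as a mutation and that the Django server's endpoint is /graphql), is to pass the long list through GraphQL variables in a JSON POST body instead of inlining it in the query text; if sendMail is actually a query, swap the mutation keyword for query.

import requests

# The field and argument names come from the question; the endpoint URL is a guess.
query = """
mutation SendMail($mailList: String!, $content: String!) {
  sendMail(mailList: $mailList, content: $content) {
    data { counter }
    ok
  }
}
"""
variables = {
    "mailList": " ".join(["abv@gmail.com"] * 1000),  # the long address string
    "content": "testing",
}
resp = requests.post("http://localhost:8000/graphql",
                     json={"query": query, "variables": variables})
print(resp.status_code, resp.json())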

Is there any way to specify a variable name in df.loc?

I am building a revenue forecast model in which one column I need depends on the current month, which is a variable.

I take curr_month as user input. From it, I also derive another variable, rem_month, for the remaining months.

Now, the column I need to create is the sum of the remaining columns, excluding the current month. The input columns also change as follows:

Jan-May: Planned
June-October: Mid
November: Final
December: Planned for next year.

I created a list for each month as follows:

Revenue_Jan = ['P_2019_Feb','P_2019_Mar','P_2019_Apr','P_2019_May','P_2019_Jun','P_2019_Jul','P_2019_Aug','P_2019_Sep','P_2019_Oct','P_2019_Nov','P_2019_Dec']
Revenue_Feb = ['P_2019_Mar','P_2019_Apr','P_2019_May','P_2019_Jun','P_2019_Jul','P_2019_Aug','P_2019_Sep','P_2019_Oct','P_2019_Nov','P_2019_Dec']
Revenue_Mar = ['P_2019_Apr','P_2019_May','P_2019_Jun','P_2019_Jul','P_2019_Aug','P_2019_Sep','P_2019_Oct','P_2019_Nov','P_2019_Dec']
Revenue_Apr = ['P_2019_May','P_2019_Jun','P_2019_Jul','P_2019_Aug','P_2019_Sep','P_2019_Oct','P_2019_Nov','P_2019_Dec']
Revenue_May = ['P_2019_Jun','P_2019_Jul','P_2019_Aug','P_2019_Sep','P_2019_Oct','P_2019_Nov','P_2019_Dec']
Revenue_Jun = ['M_2019_Jul','M_2019_Aug','M_2019_Sep','M_2019_Oct','M_2019_Nov','M_2019_Dec']
Revenue_Jul = ['M_2019_Aug','M_2019_Sep','M_2019_Oct','M_2019_Nov','M_2019_Dec']
Revenue_Aug = ['M_2019_Sep','M_2019_Oct','M_2019_Nov','M_2019_Dec']
Revenue_Sep = ['M_2019_Oct','M_2019_Nov','M_2019_Dec']
Revenue_Oct = ['M_2019_Nov','M_2019_Dec']
Revenue_Nov = ['F_2019_Nov','F_2019_Dec']
Revenue_Dec = ['P_2020_Jan','P_2020_Feb','P_2020_Mar','P_2020_Apr','P_2020_May','P_2020_Jun','P_2020_Jul','P_2020_Aug','P_2020_Sep','P_2020_Oct','P_2020_Nov','P_2020_Dec']

Now, I am planning to create a final column "Landing" which will be the sum of all the columns in the list for the current month.

var = "Revenue_" + curr_month
print(var)

--> Revenue_Sep

Now I plan to use df.loc:

df['Landing'] = df.loc[:, Revenue_Sep].sum(axis=1)

And this gives me the correct output.

But if I use the variable instead, it fails, stating "Revenue_Sep column not defined in dataframe":

df['Landing'] = df.loc[:, var].sum(axis=1)       # Need help with this.

Since the rest of my model is completely ready, I just need help with this final statement, so that I don't have to make changes throughout the entire model again.
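
The cause is that var holds the string "Revenue_Sep", not the list it names, so df.loc looks for a single column literally called "Revenue_Sep". A minimal sketch of one fix, using two of the month lists above with hypothetical toy data: store the lists in a dict keyed by month and look them up by key instead of composing a variable name.

import pandas as pd

# Toy data with a few of the question's columns.
df = pd.DataFrame({'M_2019_Oct': [1, 2], 'M_2019_Nov': [3, 4], 'M_2019_Dec': [5, 6]})

Revenue_Sep = ['M_2019_Oct', 'M_2019_Nov', 'M_2019_Dec']
Revenue_Oct = ['M_2019_Nov', 'M_2019_Dec']

# Map month names to column lists; no string-to-variable lookup needed.
revenue_cols = {'Sep': Revenue_Sep, 'Oct': Revenue_Oct}

curr_month = 'Sep'  # user input
df['Landing'] = df.loc[:, revenue_cols[curr_month]].sum(axis=1)
print(df)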

Attempting to use utf8 in WordCloud

I am attempting to create a very simple word cloud using some PDFs I have converted to text files.

text = open(path.join(d, '117feastsultanenglish.txt')).read()

Initially I got this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Sams PC\AppData\Local\Programs\Python\Python37\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1336: character maps to <undefined>

I looked it up, and it appears this is because my text files are in a format (UTF-8) that needs to be specified for the program to read them. To that end, I found a solution someone had posted that added a simple encoding argument:
text = open, encoding="utf8" (path.join(d, '117feastsultanenglish.txt')).read()

However, this gave me this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable

From my understanding, this means my string "utf8" cannot be used this way; however, I don't know how else to make Python read my text document.
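
A minimal sketch of the corrected call, assuming d is the directory variable used earlier: encoding is a keyword argument of open() itself, so it belongs inside the same parentheses as the path.

from os import path

# encoding is passed to open() along with the path, not as a separate expression.
text = open(path.join(d, '117feastsultanenglish.txt'), encoding='utf8').read()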

How to maximize revenue – Python

I have a large df consisting of hourly share prices. I was hoping to find the optimal buy price and sell price to maximize earnings (revenue – costs). I have no idea what the maximizing buy/sell prices would be, so my initial guess is a wild stab in the dark.

I tried to use SciPy's 'minimize' and 'basinhopping'. When I run the script, I appear to be getting stuck in local minima, with the results barely moving away from my initial guess.

Any ideas on how I can resolve this? Is there a better way to write the code, or a better method to use?

Sample code below

import pandas as pd
import numpy as np
import scipy.optimize as optimize

df = pd.DataFrame({
    'Time': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    'Price': [44, 100, 40, 110, 77, 109, 65, 93, 89, 49]})

# Create empty columns, initialized to zero
for col in ['Qty', 'Buy', 'Sell', 'Cost', 'Rev']:
    df[col] = 0.0


# Create Predicate to add fields
class Predicate:
    def __init__(self):
        self.prev_time = -1
        self.prev_qty = 0
        self.prev_buy = 0
        self.prev_sell = 0
        self.Qty = 0
        self.Buy = 0
        self.Sell = 0
        self.Cost = 0
        self.Rev = 0

    def __call__(self, x):
        if x.Time == self.prev_time:
            x.Qty = self.prev_qty
            x.Buy = self.prev_buy
            x.Sell = self.prev_sell
            x.Cost = x.Buy * x.Price
            x.Rev = x.Sell * x.Price
        else:
            x.Qty = self.prev_qty + self.prev_buy - self.prev_sell
            x.Buy = np.where(x.Price < buy_price, min(30 - x.Qty, 10), 0)
            x.Sell = np.where(x.Price > sell_price, min(x.Qty, 10), 0)
            x.Cost = x.Buy * x.Price
            x.Rev = x.Sell * x.Price
            self.prev_buy = x.Buy
            self.prev_qty = x.Qty
            self.prev_sell = x.Sell
            self.prev_time = x.Time
        return x


# Define function to minimize
def max_rev(params):
    global buy_price
    global sell_price
    buy_price, sell_price = params
    df2 = df.apply(Predicate(), axis=1)
    return -1 * (df2['Rev'].sum() - df2['Cost'].sum())


# Run optimization
initial_guess = [40, 90]
result = optimize.minimize(fun=max_rev, x0=initial_guess, method='BFGS')
# result = optimize.basinhopping(func=max_rev, x0=initial_guess, niter=1000, stepsize=10)
print(result.x)

# Run the final results with the optimized parameters
buy_price, sell_price = result.x
df = df.apply(Predicate(), axis=1)
print(df)
print(df['Rev'].sum() - df['Cost'].sum())
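
A hedged sketch of one alternative, assuming the df and max_rev defined above: the objective is piecewise constant in the two thresholds, since nudging a price slightly usually flips no buy/sell decision, so gradient-based BFGS sees a zero gradient and stalls near the starting point. A derivative-free grid search over the observed price range, such as scipy.optimize.brute, sidesteps that.

# Grid-search both thresholds over the observed price range; finish=None
# skips the gradient-based polishing step, which would stall for the same reason.
ranges = (slice(df['Price'].min(), df['Price'].max() + 1, 1),
          slice(df['Price'].min(), df['Price'].max() + 1, 1))
best_params = optimize.brute(max_rev, ranges, finish=None)
print(best_params)  # best (buy_price, sell_price) found on the grid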

Append 2 data frames into 1 new data frame

I am trying to combine the data in 2 different dataframes into 1 new dataframe, where both data frames have their columns in a different order.

I would like to combine the data so that the df1 and df2 values are in df3, where the 'Ref' values from df1 and df2 appear under df3's 'Ref' column, the 'Amount' values from df1 and df2 appear in df3's 'Amount' column, and so on.

df1

Ref Amount  Receiver    Payer   Month
1   2000    X           Chris   Jan
2   2222    Y           Jinnn   Jan
3   3002    Z           Chhhh   Jan
4   10000   ZZ          BBBB    Jan
5   25233   ZZZ         CCCCC   Jan

df2

Ref Month   Receiver    Payer   Amount
1   Feb      111         AAA    3000
2   Feb      222         BBB    4000
3   Feb      333         CCC    5000
4   Feb      444         DDD    6000
5   Feb      555         EEE    6000

df3

Ref Amount  Receiver    Payer   Month
1   2000           X    Chris   Jan
2   2222           Y    Jinnn   Jan
3   3002           Z    Chhhh   Jan
4   10000         ZZ    BBBB    Jan
5   25233        ZZZ    CCCCC   Jan
1   3000         111    AAA     Feb
2   4000         222    BBB     Feb
3   5000         333    CCC     Feb
4   6000         444    DDD     Feb
5   6000         555    EEE     Feb

I tried the code below, but I received results I was not expecting: there are additional columns in the new data frame that I do not want.

Is concat the correct method to use?

Thanks for the guidance

I have tried coding it using the logic below.

import pandas as pd

df1 = pd.read_excel(r"C:\Month1.xlsx")
df2 = pd.read_excel(r"C:\Month2.xlsx")

df_3 = pd.concat([df1, df2], ignore_index=True)
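
Concat is a reasonable choice here: pd.concat aligns on column names, so the jumbled column order is handled automatically, and extra unwanted columns usually mean the headers don't match exactly between the two files (stray spaces or different capitalization). A hedged sketch with toy data mirroring the question:

import pandas as pd

df1 = pd.DataFrame({'Ref': [1, 2], 'Amount': [2000, 2222],
                    'Receiver': ['X', 'Y'], 'Payer': ['Chris', 'Jinnn'],
                    'Month': ['Jan', 'Jan']})
df2 = pd.DataFrame({'Ref': [1, 2], 'Month': ['Feb', 'Feb'],
                    'Receiver': ['111', '222'], 'Payer': ['AAA', 'BBB'],
                    'Amount': [3000, 4000]})

# Strip whitespace from headers so identical names actually match,
# then concatenate and reorder to df1's column order.
df1.columns = df1.columns.str.strip()
df2.columns = df2.columns.str.strip()
df_3 = pd.concat([df1, df2], ignore_index=True)[df1.columns]
print(df_3)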

GPU out of memory in Server using SSH

I'm running Python code from GitHub, and it works fine under some conditions, so it's not a problem with the code.
The code uses PyTorch with a ResNet backbone and runs on a server through PyCharm's SSH interpreter from my local PC. My server configuration is:

  • Ubuntu 16.04
  • 4 Nvidia GTX 2080Ti cards
  • CUDA 10
  • RAM 128GB
  • Python 3.6
  • Torch 1.2.0 and torchvision 0.4.0

When I run my code on GPUs (index) 2 and 3, I get a "CUDA out of memory" error, even when I use GPUs 2 and 3 in parallel, but when I run the same code on GPU 0 or 1, it works fine. GPUs 0 and 1 are being used by other members. Sometimes GPU 0 or 1 is at 50% usage, and my code works fine on the free half of GPU 0 or 1.
Below are my server's GPU details:

[0] GeForce RTX 2080 Ti | 83'C,  73 % | 10505 / 10989 MB 
[1] GeForce RTX 2080 Ti | 71'C,  77 % |  5317 / 10989 MB
[2] GeForce RTX 2080 Ti | 48'C,   1 % |     0 / 10989 MB
[3] GeForce RTX 2080 Ti | 49'C,   0 % |     0 / 10989 MB

I searched for the same problem on Stack Overflow but was unable to solve mine:

pytorch out of GPU memory

Why pytorch needs much more memory than it should?

One gpu uses more memory than others during training

Let's suppose there is a problem in the code; my question is then why it works fine on GPU 0 and 1, even with only 50% of the GPU free. This is the first time I have faced this problem. One more thing: a week ago the server was acting up, and sometimes all the GPUs stopped working. While they were stopped I could not start a new command, although any old command (training) that was already running kept going. After a restart it worked fine.
I need some suggestions on what I should do now. I have already restarted my server, but the problem remains. To rule out the code, I tested another project and hit the same problem in both.
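
A hedged diagnostic sketch, in case the script hard-codes device indices: setting CUDA_VISIBLE_DEVICES before torch initializes CUDA restricts the process to GPUs 2 and 3 and remaps them to indices 0 and 1 inside the process, so code that assumes cuda:0 lands on a free card rather than on the busy GPU 0.

import os

# Must be set before torch touches CUDA.
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'

import torch

print(torch.cuda.device_count())  # expected: 2 (the two free cards)
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))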

Tensorflow Cudart problems

I'm relatively new to Python for AI usage, so bear with me here. I've been following along with tutorials on how to train OpenAI's GPT-2. When I try to train my model, I get what I posted below.
I'm using cuDNN 10.1 and TensorFlow 2.0. I had issues with the cudart64_100.dll file, so I followed along with this tutorial; now all I get is the traceback below.
https://www.joe0.com/2019/10/19/how-resolve-tensorflow-2-0-error-could-not-load-dynamic-library-cudart64_100-dll-dlerror-cudart64_100-dll-not-found/

Traceback (most recent call last):
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 9, in <module>
    import tensorflow as tf
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 28, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\alexv\AppData\Local\Programs\Python\Python37\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
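
A hedged diagnostic sketch: TensorFlow 2.0 looks for cudart64_100.dll, which ships with CUDA 10.0 rather than 10.1, so trying to load the DLL directly shows whether it is actually findable on PATH.

import ctypes

# Raises OSError if cudart64_100.dll is not on PATH (Windows only).
ctypes.WinDLL('cudart64_100.dll')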

How to change the datetime in a function in Python

The original code is shown below.

def fn(row):
    d1 = row.sampletime
    d2 = d1 + pd.Timedelta(nDays - 1, 'D')
    return pd.Series(cl.loc[d1:d2].values.reshape((1, -1),
        order='F').squeeze(), index=cols) 
outputall = outputall.join(outputall.apply(fn, axis=1))
outputall

I want to shift the date sampletime to 3 days earlier, so I added 2 more lines of code, as below:

def fn(row):
    d = datetime.timedelta(days=3)                  # new line
    df1['sampletime'] = df1['sampletime'] - d       # new line

    d1 = row.sampletime
    d2 = d1 + pd.Timedelta(nDays - 1, 'D')
    return pd.Series(cl.loc[d1:d2].values.reshape((1, -1),
        order='F').squeeze(), index=cols)
outputall = df1.join(df1.apply(fn, axis=1))
outputall

I received the error

ValueError: columns overlap but no suffix specified: Index(['maxtemp1', 'maxtemp2', 'maxtemp3', 'mintemp1', 'mintemp2', 'mintemp3', 'rainfall1', 'rainfall2', 'rainfall3', 'wind1', 'wind2', 'wind3', 'day_week1', 'day_week2', 'day_week3'], dtype='object')
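
A hedged sketch of one restructuring: shift sampletime once, outside fn, instead of subtracting it again on every row the apply visits; the ValueError itself says the frame being joined already contains columns with the same names, which DataFrame.join only allows when a suffix is supplied.

import datetime

# Shift the sample times once, up front, rather than inside fn.
df1['sampletime'] = df1['sampletime'] - datetime.timedelta(days=3)

# If df1 already carries columns named like fn's output, give one side a suffix.
outputall = df1.join(df1.apply(fn, axis=1), rsuffix='_shifted')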

Fill an array with random names

I'm entirely new to Python, and I have a cool idea I want to put to work.

Eventually I want to learn neural nets.

But my question right now is:

How do I create an array named "names" and fill it with randomly generated names? (The names don't have to exist; they could be something like "asdddds asdasd".)

I want to make a massive array with this, like:

names[1000000000]

I have no clue about randomly generated results, or how to fill all of the entries with randomly generated names.

Thank you!
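
A minimal sketch, assuming nonsense names are acceptable: build each name from random lowercase letters. The size below is kept small on purpose; a list of a billion entries as written above would need tens of gigabytes of RAM.

import random
import string

def random_name(min_len=3, max_len=8):
    # One nonsense word of random lowercase letters.
    length = random.randint(min_len, max_len)
    return ''.join(random.choices(string.ascii_lowercase, k=length))

# "first last" style nonsense names; 1,000 here rather than a billion.
names = [f"{random_name()} {random_name()}" for _ in range(1000)]
print(names[:5])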