I've been using the `time_index_reduce` parameter of `EntitySet.normalize_entity`. In my example, I can request the `last` instance of a user's details as I normalise the sessions table. However, this appears to not be time-aware: the `last` instance of a user's details can appear AFTER the
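As a workaround sketch outside of Featuretools (not the `normalize_entity` API itself; the table and column names below are made up), a time-aware "last" can be computed in pandas by dropping rows after a cutoff time before taking the last row per user:

```python
import pandas as pd

# Hypothetical sessions table with a user id and a time index.
sessions = pd.DataFrame({
    "user_id": [1, 1, 2],
    "session_time": pd.to_datetime(
        ["2019-01-01", "2019-01-03", "2019-01-02"]),
    "plan": ["free", "paid", "free"],
})

cutoff = pd.Timestamp("2019-01-02")

# Time-aware "last": ignore rows after the cutoff, then take the
# most recent remaining row per user.
users = (sessions[sessions["session_time"] <= cutoff]
         .sort_values("session_time")
         .groupby("user_id")
         .last())
print(users)
```

Here user 1's `2019-01-03` session is excluded because it falls after the cutoff, so the "last" value can never leak from the future.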
/usr/include/x86_64-linux-gnu/bits/mathcalls.h:65:21: error: expected ‘)’ before ‘,’ token
 __MATHCALL_VEC (sin,, (_Mdouble_ __x));
/usr/include/x86_64-linux-gnu/bits/mathcalls.h:81:22: error: unknown type name ‘sincos’
/usr/include/x86_64-linux-gnu/bits/mathcalls.h:81:29: error: expected declaration specifiers or ‘...’ before ‘,’ token
 __MATHDECL_VEC (void,sincos,,
/usr/include/x86_64-linux-gnu/bits/mathcalls.h:82:3: error: expected declaration specifiers or ‘...’ before ‘(’ token
 (_Mdouble_ __x, _Mdouble_ *__sinx, _Mdouble_ *__cosx));
/usr/include/x86_64-linux-gnu/bits/mathcalls.h:100:21: error: expected ‘)’ before ‘,’ token
 __MATHCALL_VEC (exp,, (_Mdouble_ __x));
/usr/include/x86_64-linux-gnu/bits/mathcalls.h:109:21: error: expected ‘)’ before ‘,’ token
 __MATHCALL_VEC (log,, (_Mdouble_ __x));
/usr/include/x86_64-linux-gnu/bits/mathcalls.h:153:21: error: expected ‘)’ before ‘,’ token
 __MATHCALL_VEC (pow,, (_Mdouble_ __x, _Mdouble_ __y));
error: command 'gcc' failed with exit status 1
Rolling back uninstall of psutil
Command "/home/nbuser/anaconda2_20/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-T67_lN/psutil/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-gEaonv-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-T67_lN/psutil/
You are using pip version 9.0.3, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command
However, I was able to install on-premise using the command
`pip install featuretools`, but on the Azure Jupyter notebook I am unable to.
I even tried `!pip install --ignore-installed featuretools`, but it's not working.
Hi guys, I have some problems with the CircleCI (python2.7) validation. I am working on this pull request: Featuretools/featuretools#323
I tried to reproduce the errors locally, but I cannot. I do something like this:
virtualenv -p python2.7 env
pip install -r test-requirements.txt
make installdeps lint
But I get no errors. What can I do? Do you have any developer documentation? This is my first time using CircleCI...
Hi there - I usually post on SO for FT questions, but thought that this discussion might need some more interaction.
Today, I was looking at advanced custom primitives and came across a stackoverflow question: https://stackoverflow.com/questions/53579465/how-to-use-featuretools-to-create-features-from-multiple-columns-in-single-dataf
The user is trying to create a primitive which sums columns conditionally, based on whether the row is within a timedelta. So, sum only cells where the timestamp is within the last 3 days.
I think that this is possible if the user creates a transform primitive, which just outputs the value if the cell is within the time range, and 0 otherwise. Then, they can use the `sum` aggregation primitive.
However, I'm curious to know if this is possible in a single aggregation primitive, or whether there is another mechanism for achieving this. It seems very wasteful to store a column of mostly zeros just to take its sum later on.
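For what it's worth, the transform-then-sum idea can be sketched in plain pandas (the 3-day window comes from the question; the data and column names are made up):

```python
import pandas as pd

# Hypothetical transactions with a timestamp and an amount.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2018-12-01", "2018-12-04", "2018-12-05"]),
    "amount": [10.0, 20.0, 30.0],
})

cutoff = pd.Timestamp("2018-12-06")

# Transform step: keep the value only if the row falls within the
# last 3 days before the cutoff, otherwise replace it with 0.
in_window = df["timestamp"] > cutoff - pd.Timedelta(days=3)
masked = df["amount"].where(in_window, 0.0)

# Aggregation step: an ordinary sum over the masked column.
total = masked.sum()
print(total)
```

This is exactly the "column of mostly zeros" approach described above; a single aggregation primitive that takes the timestamp and value columns together would avoid materialising `masked` at all.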
`ft.dfs`, but that only gives you a lower bound (as I understand it). What I mean is, for example, creating the following features:
from featuretools.primitives import IdentityFeature
import featuretools as ft

es = ft.demo.load_mock_customer(return_entityset=True)

from featuretools.primitives import make_agg_primitive, make_trans_primitive
from featuretools.variable_types import Text, Numeric

def word_count(column):
    '''
    Counts the number of words in each row of the column.
    Returns a list of the counts for each row.
    '''
    word_counts = []
    for value in column:
        words = value.split(None)
        word_counts.append(len(words))
    return word_counts

# Next, we need to create a custom primitive from the word_count function.
WordCount = make_trans_primitive(function=word_count,
                                 input_types=[Text],
                                 return_type=Numeric)

# Since WordCount is a transform primitive, we need to add it to the list of
# transform primitives DFS can use when generating features.
feature_matrix, features = ft.dfs(entityset=es,
                                  target_entity="customers",
                                  agg_primitives=["sum", "mean", "std"],
                                  trans_primitives=[WordCount])