A computer algebra system written in pure Python: http://sympy.org/. To get started with contributing, see https://github.com/sympy/sympy/wiki/Introduction-to-contributing
E[X*Y] = E[X]*E[Y]
I was expecting E[X] = 0.1, and then E[X]*E[Y] = 0.1*pdf(Y), which is closer to the other case shown in the screenshot above, except for the 0.07... scalar, which I haven't figured out where it came from.
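A minimal sketch (not from the thread) of checking that factorisation directly in sympy.stats; the distributions and parameters below are assumptions chosen only so that E[X] = 0.1:

```python
from sympy import symbols
from sympy.stats import Normal, E, density

z = symbols('z', real=True)
X = Normal('X', 0.1, 1)    # assumed distribution with E[X] = 0.1
Y = Normal('Y', 2, 3)      # assumed distribution for Y

print(E(X) * E(Y))         # 0.2
print(E(X*Y))              # should match E(X)*E(Y) for independent X and Y
print(density(Y)(z))       # the pdf of Y as an expression in z
```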
If the symbols are declared with symbols('mu sigma', real=True), it will assume the arguments are real and simplify; evalf() and the like then just evaluate when given an argument, e.g. stats.density(X)(x).
Another issue that seems common: after activating the sympy console with sympy.init_session(), it throws an error whenever the Python function locals() is used. Also, in the sympy console, executing sympy.stats.Normal(...) doesn't recognise the stats submodule; one has to import each submodule separately. Are there any ways to overcome these issues? Thanks!
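For the stats issue, a minimal sketch of the usual workaround is to import the submodule explicitly (import sympy does not pull in sympy.stats) and to declare assumptions on the symbols up front; the normal distribution below is just an illustrative choice:

```python
import sympy.stats as stats        # submodules must be imported explicitly
from sympy import symbols

x = symbols('x', real=True)
mu = symbols('mu', real=True)
sigma = symbols('sigma', positive=True)   # real and positive, so the pdf simplifies

X = stats.Normal('X', mu, sigma)
print(stats.density(X)(x))         # pdf of X under the stated assumptions
print(stats.E(X))                  # mu
```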
Hey, I'm new to Sympy.
I'm trying to find a minimum to a function under some constraints. The function has two variables: x,y, and two parameters: l < 0 and u > 0.
The function is: 0.5*u**2 - 1.0*y**2/(1 - x)**2 + 1.0*y*(y/(1 - x) + y/x)/(1 - x) + 0.5*(-l - y/x)*(-l*x - y) - 0.5*(u + y/x)*(u*x + y)
and the constraints are: (y >= 1.0e-6) & (l*x + y <= 1.0e-6) & (u*x + y <= u + 1.0e-6)
(The parameters l, u are known at the time of solving; I have to find the x, y that give a minimum for given l, u values.)
What is the best approach to solve it? Should I use Sympy or other tools?
Thanks!
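A hedged sketch of one way to attack this: since l and u are concrete numbers at solve time, build the expression once in SymPy, substitute the parameter values, lambdify it, and hand it to a numerical constrained optimizer. The values l = -1, u = 1 and the starting point are placeholder assumptions:

```python
from scipy.optimize import minimize
from sympy import symbols, lambdify

x, y, l, u = symbols('x y l u', real=True)
f = (0.5*u**2 - 1.0*y**2/(1 - x)**2 + 1.0*y*(y/(1 - x) + y/x)/(1 - x)
     + 0.5*(-l - y/x)*(-l*x - y) - 0.5*(u + y/x)*(u*x + y))

l_val, u_val = -1.0, 1.0                          # assumed parameter values
obj = lambdify((x, y), f.subs({l: l_val, u: u_val}), 'numpy')

# scipy 'ineq' constraints mean fun(v) >= 0
cons = [
    {'type': 'ineq', 'fun': lambda v: v[1] - 1.0e-6},                        # y >= 1e-6
    {'type': 'ineq', 'fun': lambda v: 1.0e-6 - (l_val*v[0] + v[1])},         # l*x + y <= 1e-6
    {'type': 'ineq', 'fun': lambda v: u_val + 1.0e-6 - (u_val*v[0] + v[1])}, # u*x + y <= u + 1e-6
]
res = minimize(lambda v: obj(v[0], v[1]), x0=[0.5, 0.1], constraints=cons)
print(res.x, res.fun)
```

If an exact symbolic answer is needed instead, one could set up the stationarity conditions with SymPy's diff and solve, but for numeric l, u the numerical route is usually the more practical one.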
I'm working with order terms like O(x**N) in various places. I'm seeing pretty surprising results; for example, (Sum(5*6**(-n)*x**n/2, (n, 1, 10)) + Rational("1/2") + O(x**11)).doit() takes way longer than (Sum(5*6**(-n)*x**n/2, (n, 1, 10)).doit() + Rational("1/2") + O(x**11)).doit(). The difference is the doit() on the Sum before adding the order term. I'm also seeing some oddities when raising a polynomial with an order term to a power. Is there any guidance or something I can read to understand what's going on here and how to use order terms efficiently?
I worked around it by calling doit() on my Sums and, instead of raising my polynomials to a power, doing a for loop with p_n = expand(p_n * p_1). It's not elegant, but it completes in seconds rather than OOMing my machine after several minutes. I'd guess the order term is only evaluated at the end of raising something to the nth power.
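For reference, a hedged sketch of that workaround: evaluate the Sum before attaching the order term, then build powers by repeated expand() so the O(x**11) term truncates as you go. The power 5 below is an arbitrary illustration:

```python
from sympy import symbols, Sum, Rational, O, expand

x, n = symbols('x n')

# Evaluating the Sum first keeps the later additions and multiplications cheap.
p1 = Sum(5*6**(-n)*x**n/2, (n, 1, 10)).doit() + Rational(1, 2) + O(x**11)

# Build p1**5 incrementally; terms of degree >= 11 are absorbed into O(x**11).
p = p1
for _ in range(4):
    p = expand(p * p1)
print(p)
```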
phi0 for 10*phi1 in the first equation. The expression involves exp, and I currently solve it with scipy.optimize.fsolve, but I have to rewrite the equation every time: sometimes I'm given eta and sometimes phi. With symbolic computation I could have avoided retyping the expression, and given two values of phi and an eta I could have found the eta at another phi, and vice versa for two values of eta and a phi.
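A hedged sketch of the symbolic workflow being described: write the relation once as a SymPy equation and numerically solve it for whichever symbol is unknown. The relation below, involving exp, is a made-up placeholder, not the actual equation from the thread:

```python
from sympy import symbols, Eq, exp, nsolve

eta, phi = symbols('eta phi', real=True)
relation = Eq(eta, phi + 0.1*exp(-phi))       # assumed example relation

# Given phi, solve for eta; given eta, solve for phi -- same expression both ways.
print(nsolve(relation.subs(phi, 0.5), eta, 1.0))
print(nsolve(relation.subs(eta, 0.8), phi, 0.5))
```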
The transcendental number `e = 2.718281828\ldots` is the base of the
natural logarithm and of the exponential function, `e = \exp(1)`.
Sometimes called Euler's number or Napier's constant.
Exp1 is a singleton, and can be accessed by ``S.Exp1``,
or can be imported as ``E``.
Examples
========
>>> from sympy import exp, log, E
>>> E is exp(1)
True
>>> log(E)
1