Jan Vansteenlandt
I tried the bulk update way, which does the trick in an acceptable time and also handles conflicts afterwards to ensure that everything gets updated correctly.
I thought that process would take too much time to ensure a good user experience, but we have a "build time" in our application where, for 50k documents, people need to wait less than a minute.
I wonder, though, what the difference would be if I did it via scripting.
Any experience with update_by_query vs bulk update?
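For context on the trade-off: a bulk partial update ships every document (or partial document) over the wire as newline-delimited action/source pairs to the `_bulk` endpoint, while `update_by_query` runs a script server-side and never ships the documents at all. A minimal sketch of what the bulk body looks like, in plain PHP without the library (the index name and fields are made up for illustration):

```php
<?php
// Sketch: build an NDJSON _bulk body for partial-document updates.
// Each action line ({"update": ...}) is followed by its payload line
// ({"doc": ...}); the whole body must end with a trailing newline.
function buildBulkUpdateBody(string $index, array $docsById): string
{
    $lines = [];
    foreach ($docsById as $id => $partialDoc) {
        $lines[] = json_encode(['update' => ['_index' => $index, '_id' => $id]]);
        $lines[] = json_encode(['doc' => $partialDoc]);
    }
    return implode("\n", $lines) . "\n";
}
```

With Elastica itself you would normally let `\Elastica\Bulk` build this body for you; the point of the sketch is only to show how much data crosses the wire compared with a single scripted `update_by_query` request.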
Alvaro Leon
Hi all
I'm using the Elastica bundle and I would like to know if there is a way to collect all indexes defined in the configuration,
and make Elasticsearch inserts/updates/deletes dynamically depending on an $index variable.
Alvaro Leon
I'm using symfony v3.3.6
Christian Jost
@lerua83 As far as I know, the default behaviour is that the Elastica bundle will do this for you once you have connected it with Doctrine correctly. Just for your information, this channel might not be the right place to ask questions about the Symfony bundle, since this is the library communicating with Elasticsearch, not the Symfony integration.
Alvaro Leon
@cjost1988 Thanks for the response, and sorry for the misunderstanding.
Christian Jost
@lerua83 No problem :) You’re welcome. I am using the ElasticaBundle as well.
Juan Paredes
Hello! Is there a concrete example for using the scroll api?
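Not an official example, but Elastica ships an `\Elastica\Scroll` iterator that wraps the scroll API. A sketch of how it is typically used (index name and query are placeholders, `$client` is assumed to be a configured `\Elastica\Client`, and this obviously needs a live cluster):

```php
// Sketch: iterate over all hits of a query via the scroll API.
$search = new \Elastica\Search($client);
$search->addIndex('products');
$search->setQuery(new \Elastica\Query\MatchAll());
$search->getQuery()->setSize(500);            // hits fetched per scroll page

$scroll = new \Elastica\Scroll($search, '1m'); // keep each page alive for 1 minute
foreach ($scroll as $scrollId => $resultSet) {
    foreach ($resultSet as $result) {
        $doc = $result->getData();
        // process $doc ...
    }
}
```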
Hello! I'm a developer from China. Where can I find the documentation for ruflin/Elastica?
OK, I got it.
I'm trying to connect to Amazon's Elasticsearch, but my system admin has set up access with no username and no password, and now I get 'The security token included in the request is invalid' when I try to connect. Does anyone have an example connection config for me? Thank you.
Jan Vansteenlandt
Hey all! I was wondering if the latest version of Elastica is compatible with ES 6.4; according to the readme it's tested with 6.2, but compatible with ^6.x ...
Vitaliy Okulov
hi all, can anybody explain how the Simple connection strategy works?
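From reading the source, my understanding (not authoritative) is that the Simple strategy just takes the first enabled connection from the configured list; when a request fails, that connection is marked as disabled and the next enabled one is used on the following request, whereas RoundRobin shuffles the order instead. The strategy is selected in the client config, e.g.:

```php
$client = new \Elastica\Client([
    'connections' => [
        ['host' => 'es1.example.com', 'port' => 9200], // hypothetical hosts
        ['host' => 'es2.example.com', 'port' => 9200],
    ],
    // 'Simple' is the default; 'RoundRobin' or a callable are also accepted
    'connectionStrategy' => 'Simple',
]);
```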
Jack Miras

Hey guys,

Does anyone know how to use Elastica to connect into AWS Elasticsearch Service? I've been trying for a couple hours now and I still haven't figured out how to do it.
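Elastica includes an AwsAuthV4 transport for signed requests to the AWS Elasticsearch Service. A sketch of a connection config (the domain endpoint, region, and credential lookup are placeholders; check the transport class's docblock for the exact option names in your version):

```php
$client = new \Elastica\Client([
    'host'      => 'search-mydomain.us-east-1.es.amazonaws.com', // your ES domain endpoint
    'port'      => 443,
    'transport' => 'AwsAuthV4',
    'aws_access_key_id'     => getenv('AWS_ACCESS_KEY_ID'),
    'aws_secret_access_key' => getenv('AWS_SECRET_ACCESS_KEY'),
    'aws_region' => 'us-east-1',
    'ssl'        => true,
]);
```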

Oleksandr Savchenko
Hey guys! I have started working on ruflin/Elastica#1548 and ran into an issue with @param string|\Elastica\Query $query. Could you advise on the best way to handle it?
Moises Gallego
Hi, I'm working on modifying the 3.x code to work with PHP 7.2, but I get errors in the tests even without any code modifications. I clone the repo, check out the 3.x branch, execute make tests, and get some errors and failures in the tests.
For example: 1) Elastica\Test\Query\MatchAllTest::testMatchAllIndicesTypes
Elastica\Exception\ResponseException: all shards failed
Does anyone know what it could be?
Oleksandr Savchenko
@mgallego hello! actually all tests are passing on php 7.2. you can check builds here https://travis-ci.org/ruflin/Elastica/builds/465984009?utm_source=github_status&utm_medium=notification (it's for one of pull requests)
Moises Gallego
but I'm working on 3.x branch
with old code
Oleksandr Savchenko
Stas Pochepko
Hi, everyone.

Please, help me with my exception

`Error: Method Nanobe\Nobe\Nobe::__toString() must not throw an exception, caught Elastica\Exception\Bulk\ResponseException: Error in one or more bulk request actions:

update: /my_replaced_index/my_replaced_type/e2b116bc-c5f1-4a15-8578-e02da871f924 caused blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];`

How can I resolve this issue? I did not find information in the official repository about these blocks. I tried using curl -XPUT -H "Content-Type: application/json" http://elastic:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' but it did not have any effect.
make sure you have LOTS of disk space. Much more than you think you need.
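For context: Elasticsearch applies that FORBIDDEN/12 read-only block itself once the flood-stage disk watermark (95% disk usage by default) is exceeded, and on 6.x it does not lift the block automatically. So the usual sequence is: free disk space (or raise the watermark) first, then clear the block, otherwise Elasticsearch immediately re-applies it. A sketch:

```shell
# 1. Free disk space first; the block comes back as long as the
#    flood-stage watermark is still exceeded.
# 2. Then remove the block from all indices:
curl -XPUT -H 'Content-Type: application/json' \
  'http://elastic:9200/_all/_settings' \
  -d '{"index.blocks.read_only_allow_delete": null}'
```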

Hello, I have a question.

Will the ruflin/elastica package add support for Elasticsearch SQL? https://www.elastic.co/guide/en/elasticsearch/reference/6.4/sql-getting-started.html
A year has passed since it became free to use, but I can't find it in the lib...
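As far as I know there is no dedicated SQL query builder in the lib, but the endpoint can still be reached through the client's generic request method. A sketch (the _xpack/sql path follows the 6.4 docs linked above; `$client` and the table/fields in the query are assumptions):

```php
$response = $client->request('_xpack/sql', \Elastica\Request::POST, [
    'query' => 'SELECT name FROM products WHERE price > 10', // hypothetical query
]);
$data = $response->getData(); // columns + rows, as described in the SQL REST docs
```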

The Bonsai Elasticsearch service runs on port 443, but I get an SSL error when trying to connect. I'm using FOSElasticaBundle but can't see any configuration options there that allow for SSL. Any suggestions? Is there something else I can configure?
Hello, is it possible to install Elastica 7 via Composer? If so, how?
@localhorst I found out how, by using the GitHub repository in composer.json with version dev-master:
  "repositories": [
      {
          "type": "vcs",
          "url": "https://github.com/ruflin/Elastica"
      }
  ],
  "name": "ppw/ppw-search",
  "license": "proprietary",
  "minimum-stability": "dev",
  "prefer-stable": true,
  "require": {
    "ruflin/elastica": "dev-master"
  }
John Doe

I have a project where I already updated the mapping definitions of our indices to be Elasticsearch 7 compatible, i.e. to omit mapping types. Elastica seems to force the parameter include_type_name for index creation, which forces the mapping definition to contain a mapping type.

See lib/Elastica/Index.php:

public function requestEndpoint(AbstractEndpoint $endpoint)
{
    $cloned = clone $endpoint;

    if ($endpoint instanceof Create || $endpoint instanceof \Elasticsearch\Endpoints\Indices\Mapping\Put) {
        $cloned->setParams(['include_type_name' => true]);
    }

    return $this->getClient()->requestEndpoint($cloned);
}

Will this stay for Elastica 7? Is there another way of creating indices which are already Elasticsearch 8 compatible?

Igor Nóbrega
Good evening guys. I'm updating a project's dependencies and I got something weird using 6.1.1 with symfony/monolog-bundle 3.4.0. Now I have this [2019-09-03 18:14:40] app.DEBUG: Elastica Request log, and it seems to be in a loop, because it's getting bigger and bigger until it breaks the app. Is there a way to disable these Elastica logs?
An example:
[2019-09-03 18:14:41] app.DEBUG: Elastica Request {"request":{"path":"_bulk","method":"POST","data":"{\"index\":{\"_index\":\"monolog-2019.09.03\",\"_type\":\"record\"}}\n{\"message\":\"Elastica Request\",\"context\":{\"request\":{\"path\":\"_bulk\",\"method\":\"POST\",\"data\":\"{\\"index\\":{\\"_index\\":\\"monolog-2019.09.03\\",\\"_type\\":\\"record\\"}}\n{\\"message\\":\\"Elastica Request\\",\\"context\\":{\\"request\\":{\\"path\\":\\"_bulk\\",\\"method\\":\\"POST\\",\\"data\\":\\"{\\\\"index\\\\":{\\\\"_index\\\\":\\\\"monolog-2019.09.03\\\\",\\\\"_type\\\\":\\\\"record\\\\"}}\\n{\\\\"message\\\\":\\\\"Elastica Request\\\\",\\\\"context\\\\":{\\\\"request\\\\":{\\\\"path\\\\":\\\\"_bulk\\\\",\\\\"method\\\\":\\\\"POST\\\\",\\\\"data\\\\":\\\\"{\\\\\\\\"index\\\\\\\\":{\\\\\\\\"_index\\\\\\\\":\\\\\\\\"monolog-2019.09.03\\\\\\\\"
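The nested escaping in that record shows what is happening: the Elastica client logs its own request on the app channel, the Elasticsearch monolog handler ships that log record back to Elasticsearch, which produces another Elastica request log, and so on. One way out (a sketch, assuming MonologBundle YAML config; the handler name, host, and channel names are assumptions) is to keep that channel out of the handler so it cannot feed itself:

```yaml
# config/packages/monolog.yaml (sketch)
monolog:
    handlers:
        es:
            type: elasticsearch
            elasticsearch:
                host: localhost
                port: 9200
            # exclude the channel that Elastica's own debug records land on,
            # so the handler cannot re-ingest its own requests
            channels: ['!app']
```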
Fritz Meissner
hi folks. Elastic theory question here: do I need one index per type of data that I want to query?

like if I denormalise a relational DB with products that belong to product types so my index docs look like this:

product1 type1
product2 type1
product3 type2
product4 type3

obviously this index can answer queries about products, but can it also answer queries about types?
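For what it's worth, a single denormalised index can usually answer both: a query filters on the type field, and a terms aggregation groups products by type without needing a second index. A sketch of the request body (field name taken from the example above; assumes the type field is mapped as a keyword):

```json
{
  "size": 0,
  "aggs": {
    "products_per_type": {
      "terms": { "field": "type" }
    }
  }
}
```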

Hello there! Is there a plan to release a version compatible with ES 7?
I'm getting this error in Elastica, @ruflin @p365labs:
The Response content must be a string or object implementing __toString(), "object" given.


Hi Folks,
Is there a command to index everything at once? I've got Elastica, powered by heyday/silverstripe-elastica for SilverStripe projects, up and running, and I can ingest data into Elasticsearch after a Publish. However, I can't find a facility to index all of my configured DataObjects at once, to essentially get the entire database rather than just individual records after saving.
Hello, how can I fix this?
What are the steps to install ruflin/elastica using Composer?
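Assuming Composer is already installed, it is typically a single command from the project root (the version constraint here is just an example):

```shell
composer require ruflin/elastica "^6.1"
```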
Michael J Burling
Hello friends. Just curious about the AWS Elasticsearch Service support in Elastica. I'm using commercial SugarCRM software that uses Elastica, and it seems to work just fine with the AWS ES Service when the protocol is http, but https doesn't appear to work. The errors coming through display the search domain with port 80, so it seems that they aren't wrapping the library correctly. I opened a support ticket with them and they're telling me that it's not their system. Erroneously, they're attributing Elastica's AwsAuthV4 to AWS engineering, and claiming that if there are errors with it, that goes beyond the scope of their application, so it's my problem. I'm not even sure where to begin here.
Maybe if I start with a toy example connecting to an AWS ES service, that would be enough to show them that it ain't Elastica.
Hello all. Does anyone know about any issues with the ES index not being updated on bulk update? I have a project in Symfony 4.2 with FOSElasticaBundle 6.0.x-dev, and when I do a bulk update and flush the data, the ES index is not getting updated. Could it be that the Doctrine listener doesn't get triggered?
Hey, what's the plan for the 7.0.0 release? I'd like to get the date so we can plan ahead. Thanks.