I'm seeking advice, as I've been struggling for days with problems with Phoenix Query Server under heavy load.
To explain a bit more: I have 3 Phoenix Query Servers behind Knox, load-balanced through the HA role in Knox.
I'm accessing Phoenix through PHP web services using Simba's ODBC driver. When testing each web service individually, everything works fine.
(The only curious thing is that the first connection to Phoenix (odbc_connect) takes nearly one second to establish; subsequent odbc_connect calls are very fast.)
When I open my website, which generates quite a load (though nothing a single MySQL server couldn't handle before, hence my surprise), within one minute my Apache log fills with errors about failed connections to Phoenix:
mainly two different errors (I use PHP's error_log function to log what odbc_error_message returns in these files):
After a failed odbc_connect:
S1000 ## [unixODBC][Hortonworks][Phoenix] (40) Error with HTTP request, response code: 500
After a failed cluster query (which may result from the previous error):
S1000 ## [Hortonworks][Phoenix] (2100) An error occured while preparing statement: \n8org.apache.calcite.avatica.proto.Responses$ErrorResponse\x12\x1a\x13\n\x1a\x12org.apache.calcite.avatica.NoSuchConnectionException\n\tat org.apache.calcite.avatica.jdbc.JdbcMeta.getConnection(JdbcMeta.java:565)\n\tat org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:690)\n\tat org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:209)\n\tat org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1199)\n\ta##select next value for "akinator_device_tags_id" as "nextvalue"
Would you have any clue or insight about this? Am I missing some important Phoenix config options?
At first I suspected write latency linked to index updates in Phoenix, but I still get the errors even when deactivating UPSERT queries and allowing only SELECTs.
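For what it's worth, one way to check whether the failures are transient (for example, the load balancer handing a request to a PQS node that does not hold the Avatica connection state, which would explain the NoSuchConnectionException) is to wrap the connect/query call in a small retry loop. This is a minimal sketch in Python of the retry idea, which would translate directly to the PHP odbc_connect call; the DSN name, retry counts, and the mapping of driver errors to "transient" are assumptions:

```python
import time

def with_retries(action, attempts=3, delay=0.5, transient=(ConnectionError,)):
    """Run `action`, retrying on transient errors with a short linear backoff.

    Avatica's NoSuchConnectionException / HTTP 500 responses surface as
    driver errors; deciding which exception types count as `transient`
    is left to the caller.
    """
    last = None
    for i in range(attempts):
        try:
            return action()
        except transient as exc:
            last = exc
            time.sleep(delay * (i + 1))  # back off a bit more each attempt
    raise last

# Hypothetical usage with pyodbc (the DSN name is an assumption):
# import pyodbc
# conn = with_retries(lambda: pyodbc.connect("DSN=PhoenixPQS"),
#                     transient=(pyodbc.Error,))
```

If retried requests succeed, that points at load-balancer session affinity (sticky sessions in Knox) rather than Phoenix itself.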
Interactive mode is required for audit logs on the bastion for security purposes. The problem with this feature is that it breaks TCP forwarding, and you can't use tools like Ansible to deploy/update tools on instances of this ADP cluster.
You have two options to work around this:
1 - Deactivate interactive mode on the bastion - Step-by-step guide
In /etc/ssh/sshd_config, comment out the following lines:
#ForceCommand /opt/bastion/bastion
#AllowTcpForwarding no
This change is permanent: nothing re-enables interactive mode at boot time.
Then restart the SSH daemon to apply the change:
systemctl restart sshd
Of course, this deactivation can also be done temporarily: an automated procedure (based on Ansible, for example) can be set up to deactivate interactive mode only for the time it takes to apply some changes.
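The temporary deactivation above could be scripted around the config edit itself. Here is a minimal Python sketch that comments out or restores the two directives in an sshd_config body; the directive values come from the guide above, but treat this as an illustration, not a tested procedure:

```python
DIRECTIVES = ("ForceCommand /opt/bastion/bastion", "AllowTcpForwarding no")

def toggle_interactive_mode(config_text, enable):
    """Comment out (enable=False) or restore (enable=True) the bastion
    directives in an sshd_config body, returning the new text."""
    lines = []
    for line in config_text.splitlines():
        stripped = line.lstrip("#").strip()
        if stripped in DIRECTIVES:
            line = stripped if enable else "#" + stripped
        lines.append(line)
    return "\n".join(lines)
```

After writing the result back to /etc/ssh/sshd_config, restart sshd (systemctl restart sshd) as in the guide, run your Ansible playbook, then toggle the directives back on and restart sshd again.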
2 - Create a new bastion - Step-by-step guide
This instance is not registered in FreeIPA, so DNS resolution and connection setup can take a long time.
If you use a tool like Ansible to automate deployment, you first have to increase the SSH connection timeout, or add an entry with the private IP of this new bastion to /etc/hosts on all the instances you want to connect to.
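The /etc/hosts change is easy to automate across instances. A minimal Python sketch of the edit itself (the IP and hostname below are placeholders, not values from this cluster):

```python
def upsert_hosts_entry(hosts_text, ip, hostname):
    """Return hosts-file text with `hostname` pointing at `ip`,
    replacing any existing line that mentions that hostname."""
    kept = [line for line in hosts_text.splitlines()
            if hostname not in line.split()]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"
```

In practice you would run this (or the equivalent Ansible lineinfile task) against /etc/hosts on each target instance.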
This is the link to the open-source code of the page that generates the credentials. It should help you design what you need.
Hello, regarding the part about generating the password on the cluster: I saw the code on your GitHub. It is the code that creates the passwords, but I didn't find the endpoint for sending the passwords to my OVH service name (analytics cluster ID). Does this endpoint exist, so that I can send a POST or PUT request?
<value>https://s3.<public cloud region>.cloud.ovh.net</value>
I want to build a batch-based ETL pipeline from an RDBMS (SQL Server) using Apache Spark. My Spark cluster runs as part of a Cloudera installation.
My question is: where should I store the ETL job's watermark (for example, the maximum TIMESTAMP) so that the next batch run picks up only the records with a larger timestamp?
Should I use a Hive table, or is there a better approach to store this data so it can be used by subsequent jobs?
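Whatever the backing store (a Hive table, an HDFS file, or a control table in the source RDBMS), the usual pattern is a tiny get/set watermark layer: the job reads the watermark before extracting and writes it back only after the load commits. A minimal file-backed sketch in Python; the path, job name, and default timestamp are assumptions for illustration, and in production you would point this at a transactional store:

```python
import json
from pathlib import Path

class WatermarkStore:
    """Persist the last processed high-water mark per job.

    Backed by a JSON file here for illustration; a Hive table or an
    RDBMS control table works the same way (read before the batch,
    write only after the load succeeds).
    """
    def __init__(self, path):
        self.path = Path(path)

    def get(self, job, default="1970-01-01 00:00:00"):
        if not self.path.exists():
            return default
        return json.loads(self.path.read_text()).get(job, default)

    def set(self, job, value):
        data = json.loads(self.path.read_text()) if self.path.exists() else {}
        data[job] = value
        self.path.write_text(json.dumps(data))

# The extraction query would then filter on the watermark, e.g. (pseudo-SQL):
#   SELECT * FROM src WHERE last_modified > :watermark
```

The important property is ordering: update the watermark only after the batch has been durably written, so a failed run simply reprocesses the same window.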