James Parry
@JP-ZS_gitlab
Thanks Marco, will do
Jesus M Bianco T
@jbiancot_twitter
Hi there, we are running LearningLocker 3.18 with Percona Mongo 3.6 in a replica set of 3, and we only read from secondaries, meaning all inserts go to Mongo1 (the primary). We have noticed something odd: some of the bookmark statements we create, especially the last one, are being saved after the "completed" statement, which should come later than any bookmark. As far as I know the calls are made synchronously. Let me know your thoughts.
Ahmed-Eid
@Ahmed-Eid

Hi,

I am facing this error when curling the IP

sudo curl -X GET https://URL/data/xAPI/statements -H 'Authorization: Basic NDA0MmZmYjk5NGFjYTRiOGVmZmVhYTFhZmNhOTY2MTEwZDZlZGY3ZTowZTc5M2NiMDkyMDJiNTJmNDFkY2Q0NjYzMmMxNTUzOTZlYjQxOTVi' -H 'Content-Type: application/json' -H 'X-Experience-API-Version: 1.0.3' -H 'cache-control: no-cache'
output:

{"errorId":"79afacc2-ba43-4dc2-b360-cd2b005d4514","message":"Unauthorised"}

from the error log file,
ubuntu@hostname:/var/log/learninglocker$ tail xapi_stderr-4.log
2021-03-22 14:01:09:461 - error: e0f17d74-7f60-4466-b166-b0b802ccaf3a: jscommons handled - Unauthorised
2021-03-22 14:05:11:157 - error: 6d0d1c7f-52d3-407d-b813-a4d9bb1024b5: jscommons handled - Unauthorised
2021-03-22 14:06:03:801 - error: 2f372f03-9fef-451c-be39-43b23a432ac0: jscommons handled - Unauthorised
2021-03-22 14:09:42:858 - error: 77491e35-75fb-4300-9ac1-eaabfcb79010: jscommons handled - Unauthorised
2021-03-22 14:19:45:541 - error: b9f83dff-6186-42ad-b4d4-31e8a24b9ca1: jscommons handled - Unauthorised
2021-03-22 15:09:25:998 - error: b14a8f2d-0a7a-4f2e-b35d-5ecdaa383d80: jscommons handled - Unauthorised
2021-03-22 20:33:05:023 - error: 684ef2af-63c6-4e33-93d8-c8e45f0bc586: jscommons handled - Unauthorised
2021-03-23 10:46:54:659 - error: 006657c9-dc96-48fa-867a-c7dc7baa6f6a: jscommons handled - Unauthorised
2021-03-23 12:17:10:996 - error: 75c9d7fa-9b37-44e9-8db7-ece1e31eebdd: jscommons handled - Unauthorised
2021-03-23 12:20:23:341 - error: 79afacc2-ba43-4dc2-b360-cd2b005d4514: jscommons handled - Unauthorised


Please advise
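(Editor's note, hedged: an Unauthorised response like the one above usually means the Basic token no longer matches a store client's key and secret. A minimal sketch of rebuilding the token; the key/secret values below are placeholders, not real credentials, and should be replaced with the ones shown for the client in the Learning Locker UI.)

```shell
# Recompute the Basic auth token from the store client's key and secret.
# "your-client-key"/"your-client-secret" are placeholders.
KEY="your-client-key"
SECRET="your-client-secret"
TOKEN=$(printf '%s:%s' "$KEY" "$SECRET" | base64 | tr -d '\n')
echo "$TOKEN"

# Then retry, e.g.:
# curl -X GET 'https://URL/data/xAPI/statements' \
#   -H "Authorization: Basic $TOKEN" \
#   -H 'X-Experience-API-Version: 1.0.3'
```

If the recomputed token differs from the one in the failing request, the credentials were likely carried over from the cloned container and no longer match this LRS's database.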

Ahmed-Eid
@Ahmed-Eid
This LRS is hosted in Linux containers (I cloned it from another LRS container), so I don't know whether I missed some configuration.
Thomas Deleur
@DeleurThomas_twitter
Hi, we are currently exploring Learning Locker and I would like to see if there are any performance numbers for the open source version. I currently have a 3-node Mongo cluster, 1 Redis, and an EC2 instance, and the performance I see is a little concerning with just hundreds of statements. OPTIONS calls to the state endpoints are taking over a second, statement POSTs are hovering between 500-900ms, and the few PUT requests to the state endpoint are between 50-80ms. I would like to know if I am missing anything fundamental from a config standpoint. Any pointers would be helpful.
Cameron Beeler
@CameronBeeler
I don't know what EC2 instance type you are using. The requirements in the documentation suggest at least a t2.medium instance size; anything below that will have issues, in my experience. Good luck.
Thomas Deleur
@DeleurThomas_twitter
These are t2.xlarge instances. I am noticing that a few xAPI statements trigger several hundred inserts in Mongo. As the inserts approach 400-500/sec, response times start to lag.
Cameron Beeler
@CameronBeeler
@DeleurThomas_twitter OK, well I'm no expert, but the t2.xlarge is well beyond the minimum requirements. I cannot speak to the MongoDB clusters; the team I joined began with local MongoDB and migrated to Atlas-hosted MongoDB before I arrived, and we were satisfied with the Atlas performance. They provide a free M0 cluster size that you can use to evaluate. Good luck!
jaylco
@jaylco

Hi, I'm looking for some guidance regarding learning locker/Azure Cosmos DB.

Our implementation is as follows: learning locker xAPI is deployed to its own app service and scaled using an Azure service plan, whilst Azure CosmosDB is being used as the MongoDB implementation for the database. We're using xAPI to track SCORM package progress for our learners.

The problem we're facing is significant performance issues with Cosmos (mainly 429 responses) during high-traffic periods of the day. I'm wondering if these issues are caused by Cosmos not being 100% compatible with Learning Locker, and whether we should try to migrate the data to a MongoDB Atlas instance instead. Does anyone have any suggestions or advice from seeing this kind of thing before?

Thomas Deleur
@DeleurThomas_twitter
@CameronBeeler, thanks. With your Atlas instance, can you please share how many inserts/sec and queries/sec it's able to sustain?
Nicola Mastrorilli
@ennemastro_twitter
Hi everyone! I'm trying to fine-tune some LL widgets. Is it possible to round numbers (averages) inside a widget? Is it possible to sort activities inside a widget? Thanks in advance!
ctm8788
@ctm8788
@jaylco I dealt with what you're going through. Fortunately for you, Cosmos DB has come a long way in the last few months since I started our deployment.
ctm8788
@ctm8788
@jaylco
1. Enable server-side retry for Cosmos DB.
2. Set your Mongo version to 4.0. (1 and 2 can both be done from the Features area.)
3. Create custom indexes and drop the wildcard indexes. (You can open a support ticket with Microsoft to enable sensitive logging; this lets you see the exact commands being executed against the DB so you can create the optimal indexes.)
4. Configure the collections that see high RU usage to auto-scale, so it's a little cheaper to have higher capacity.
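(Editor's note, hedged: as a sketch of step 3 only. The database name, index keys, and wildcard index name below are illustrative assumptions; the real keys should be derived from the commands that Microsoft's sensitive logging shows being run against the DB.)

```shell
# Sketch: create a targeted index and drop a wildcard index on the
# statements collection via mongosh. All names here are illustrative.
mongosh "$COSMOS_CONNECTION_STRING" --eval '
  const lldb = db.getSiblingDB("learninglocker_v2");
  lldb.statements.createIndex({ organisation: 1, timestamp: -1 });
  lldb.statements.dropIndex("$**_1");   // example wildcard index name
  printjson(lldb.statements.getIndexes());
'
```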
jaylco
@jaylco
@ctm8788 Thank you so much for your input! This advice is exactly what I needed, I'll look into it
Thomas Deleur
@DeleurThomas_twitter
Hi everyone, I am running some benchmarking on a Learning Locker instance with a single-instance LRS on a 16-core, 32GB RAM machine. All I am testing is PUT and GET on the state API, to see if I can get 10-11k requests/sec. I have optimized nginx to handle the incoming requests, and all I am able to get through with this setup is 2,200 req/sec with 18-20ms response times. It's hard to increase the throughput beyond this; any increase in requests spikes response times up to a few hundred ms.

Memory is good (tons left), CPU is only 25% used, and disk and network don't seem to be a problem, so I cannot see why the throughput can't be pushed beyond 2,200 req/sec. The recommended Mongo indexes are in place. The nginx status page shows it receives all the requests without issues, and its APM metrics show that as throughput increases, the time is spent in $upstream_response. Sample: method=GET request_length=609 status=200 bytes_sent=948 body_bytes_sent=342 upstream_status=200 request_time=0.407 upstream_response_time=0.408 upstream_connect_time=0.000 upstream_header_time=0.408.

Mongo doesn't look stressed either: free monitoring shows no visible problems at 7,000 reads/s with a 0.05ms response time. I am guessing something is going on in the Node layer, but I am not sure what, and there are no logs to figure it out. Any pointers would be appreciated!
Thomas Deleur
@DeleurThomas_twitter
Screen Shot 2021-04-07 at 10.01.44 PM.png
An update. I believe the workers are not scaling up to take the incoming requests from nginx. I tried launching more instances of the xAPI service (pm2 start xAPI -i 10), but what I observe is that the default 2 instances take the full load, and the couple of additional instances that show up don't share much of it. Please see the top output. Am I doing something wrong in configuring the xAPI service to start with more than 2 instances?
Thomas Deleur
@DeleurThomas_twitter
I couldn't start with a predefined number of xAPI processes; however, I was able to scale it up afterwards. I'll test now and post my results. FYI.
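(Editor's note, hedged: pm2 reads the process count from its process file — Learning Locker ships one as pm2/all.json — so the instance count can also be set up front there. A sketch of the relevant fragment; the app name "xapi" and script path are illustrative and should be checked against the shipped file. Note that pm2 only forks multiple instances when exec_mode is "cluster".)

```json
{
  "apps": [{
    "name": "xapi",
    "script": "dist/server.js",
    "instances": 10,
    "exec_mode": "cluster"
  }]
}
```

Alternatively, a running app can be resized without editing the file, e.g. `pm2 scale xapi 10` (name per `pm2 list`).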
Pavlo Barvinko
@pavlo16_gitlab
Hi everybody. I have been trying for a couple of days to get statement forwarding working. I have set up statement forwarding to my HTTPS endpoint, but no traffic reaches it at all from Learning Locker, no matter what I try. Can anyone confirm that statement forwarding is actually functional? The Learning Locker installation is local, with Mongo and Redis on a single Ubuntu 18.04 AWS EC2 instance. Any pointers on where to look are greatly appreciated. Thanks!
Thomas Deleur
@DeleurThomas_twitter
@pavlo16_gitlab Yes, it works; I have tested it recently and found it operational. Where is your HTTPS endpoint, and is it functioning properly? Meaning, can you send some requests to it directly and see them being processed?
Pavlo Barvinko
@pavlo16_gitlab
@DeleurThomas_twitter Interesting. I have found that each attempt to forward a statement produces "Cannot read property '_id' of null" in the worker logs. There is a fix proposed at the bottom of the GitHub thread LearningLocker/learninglocker#1251. So I wonder what is different between my installation and yours, such that it works for you but produces the error for me.
And yes, direct requests to the endpoint are received and processed properly.
Thomas Deleur
@DeleurThomas_twitter
I am load testing just PUT and GET (state API), and regularly (about every 40-50 seconds) I see transactions/sec in iostat jump to several thousand, while it's in the 30s otherwise. When this surge happens, latency obviously takes a hit. I am wondering if Mongo is doing some kind of bulk update or flush at regular intervals. Any idea why I see this behavior? I couldn't find anything and have been on this for a couple of days with not much progress. Any pointers would be very helpful.
Thomas Deleur
@DeleurThomas_twitter
Screen Shot 2021-04-08 at 7.31.16 PM.png
Screen Shot 2021-04-08 at 7.31.43 PM.png
Digging into this further: when the TPS goes up, I actually see fewer commands sent to Mongo; you can see the stats above. That sudden drop in commands sent to Mongo makes me wonder if the xAPI Node worker is the problem. I'm not sure which logs I can check to troubleshoot this further.
Screen Shot 2021-04-08 at 7.36.23 PM.png
Thomas Deleur
@DeleurThomas_twitter
When this happens, req/sec (green, top) drops and latency (yellow, bottom) suffers.
Thomas Deleur
@DeleurThomas_twitter
Team, any thoughts on the above? I see request bursts resulting in high TPS. I have also disabled Mongo aggregation to see if that's the cause, but the result is the same.
Jesus M Bianco T
@jbiancot_twitter
Hi, we are using Learning Locker v3.18, and today I set up another server with a slightly different domain: instead of lrs.fleetdefense.com, we are using lrs.uat.fleetdefense.com. Now I am getting a bunch of requests in nginx, and there is a lot of communication between the LRS and Mongo:
[03/May/2021:18:22:08 -0400] "GET /api/statements/aggregateAsync?pipeline=%5B%7B%22%24match%22%3A%7B%22timestamp%22%3A%7B%22%24gte%22%3A%7B%22%24dte%22%3A%222021-03-03T22%3A20%3A00.000Z%22%7D%7D%7D%7D%2C%7B%22%24match%22%3A%7B%22%24and%22%3A%5B%7B%7D%2C%7B%7D%5D%7D%7D%2C%7B%22%24project%22%3A%7B%22group%22%3A%22%24statement.verb.id%22%2C%22model%22%3A%22%24statement.verb%22%7D%7D%2C%7B%22%24group%22%3A%7B%22_id%22%3A%22%24group%22%2C%22count%22%3A%7B%22%24sum%22%3A1%7D%2C%22group%22%3A%7B%22%24first%22%3A%22%24group%22%7D%2C%22model%22%3A%7B%22%24first%22%3A%22%24model%22%7D%7D%7D%2C%7B%22%24sort%22%3A%7B%22count%22%3A-1%7D%7D%2C%7B%22%24limit%22%3A10000%7D%2C%7B%22%24project%22%3A%7B%22_id%22%3A1%2C%22count%22%3A1%2C%22model%22%3A1%7D%7D%5D&skip=&sinceAt= HTTP/1.1" 200 117 "https://lrs.uat.fleetdefense.com/organisation/5a05e49bb292bd5b10e982a4/settings/stores" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:88.0) Gecko/20100101 Firefox/88.0"
D QUERY [conn36] Using idhack: query: { _id: ObjectId('5a05e49bb292bd5b10e982a1') } sort: {} projection: {} batchSize: 1 limit: 1
D QUERY [conn34] Relevant index 0 is kp: { _id: 1 } unique name: 'id' io: { v: 2, key: { _id: 1 }, name: "id", ns: "learninglocker_v2.role" }
D QUERY [conn34] Only one plan is available; it will be run but will not be cached. query: { _id: { $in: [ ObjectId('5a05e49bb292bd5b10e982a5') ] } } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 }
D QUERY [conn31] Using idhack: query: { _id: ObjectId('5a05e49bb292bd5b10e982a1') } sort: {} projection: {} batchSize: 1 limit: 1
D QUERY [conn29] Relevant index 0 is kp: { _id: 1 } unique name: 'id' io: { v: 2, key: { _id: 1 }, name: "id", ns: "learninglocker_v2.role" }
D QUERY [conn29] Only one plan is available; it will be run but will not be cached. query: { _id: { $in: [ ObjectId('5a05e49bb292bd5b10e982a5') ] } } sort: {} projection: {}, planSummary: IXSCAN { _id: 1 }
D QUERY [conn32] Using idhack: query: { _id: ObjectId('5a05e49bb292bd5b10e982a1') } sort: {} projection: {} batchSize: 1 limit: 1
Ilya Shikhaleev
@ilya.shikhaleev:matrix.org
[m]
Hi! Is it possible to use SSO with the Learning Locker LRS? I need to provide direct access from my platform to the LRS for users.
I found several mentions of JWT in the GitHub repo https://github.com/LearningLocker/learninglocker, but it looks like the JWT is for internal tasks, not for SSO with external platforms.
Jesus M Bianco T
@jbiancot_twitter
I don't see any activity on this chat since May 18th. Are people still using it?
deepak2k
@deepak2k:matrix.org
[m]
Hello! We installed Learning Locker v7.0.0 using the default installation scripts from https://docs.learninglocker.net/guides-installing/. We are able to send xAPI statements and the data shows up in the LRS stores. However, it seems the workers are not automatically running the STATEMENT_QUERYBUILDERCACHE_QUEUE and STATEMENT_PERSON_QUEUE every time new statements come in; we have to manually run 'node cli/dist/server batchJobs ...' in order to be able to query the statements in LL. We have the ALLOWED_WORKER_QUEUES setting enabled in .env, but that doesn't seem to have any effect. Are we missing anything? Any help would be greatly appreciated. Thank you!
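(Editor's note, hedged: the worker process consumes the queues named in ALLOWED_WORKER_QUEUES. A sketch of the .env fragment for the two queues mentioned above, assuming a comma-separated list; the exact format should be verified against the shipped .env.example, and the worker process itself must be running for any queue to be consumed.)

```shell
# .env fragment (sketch): let the worker consume the queues that populate
# the query-builder cache and the persona data. Comma-separated format is
# an assumption; verify against .env.example.
ALLOWED_WORKER_QUEUES=STATEMENT_QUERYBUILDERCACHE_QUEUE,STATEMENT_PERSON_QUEUE
```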
kchrislee
@kchrislee
I get an error when running pm2 start pm2/all.json; the processes show an errored status.
image.png
image.png
image.png
subhash
@dulla_gitlab
I see the below error in the xAPI logs, and I am also getting a 500 internal error when hitting /data/xapi/statements, etc. image.png
subhash
@dulla_gitlab
@ht2 kindly provide a solution to fix this issue
Ren4tus
@Ren4tus
When I create LearningLocker on AWS, I get a "too long to respond" error. I'm using t2.medium and I set up security groups, VPCs, and a subnet. Is there a problem?
image.png
image.png
image.png
Todd McIntosh
@toddmcintosh
Hi guys. I'm just setting up New Relic monitoring for our LL instance (on Ubuntu). I've modified the two .env files to enable New Relic reporting; however, the data comes through to NR as server-level data, not application-level data. For comparison, we have a .NET app on a separate Windows box with both a server agent and an application agent for New Relic. How do I get LL application data to pipe into New Relic properly? Thanks, guys!
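(Editor's note, hedged: server-level data typically comes from New Relic's infrastructure agent, while application-level data requires the Node APM agent to be loaded inside the Node process itself; .env variables alone don't load it. A sketch of what that usually involves, with illustrative values.)

```shell
# .env fragment (sketch): standard New Relic Node agent variables.
NEW_RELIC_APP_NAME=learning-locker        # illustrative app name
NEW_RELIC_LICENSE_KEY=xxxxxxxxxxxxxxxx    # your APM license key

# The newrelic package must also be loaded in-process, e.g. by
# preloading it when the Node service starts:
# node -r newrelic dist/server.js
```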