Renato Cavalcanti
@renatocaval
the sample config is super minimal
Arsene
@Tochemey
@renatocaval I made an observation: when I run the app with a normal sbt runAll it works. However, when I run it from inside Docker I get that error
Renato Cavalcanti
@renatocaval
ah ok
that must be something else
maybe your containers are not seeing each other (assuming your postgres is running in another container)
at bootstrap we make the JNDI resource
it should throw an exception, but maybe there is a bug and you are not seeing the exception
then, when the plugin tries to read the JNDI resource it doesn't find it
because we failed to create it on bootstrap
check if you see other exceptions on bootstrap
Arsene
@Tochemey
@renatocaval this is my docker-compose:
version: '3.5'
services:
  postgres:
    container_name: postgres_container
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "admin@namely.com"
      PGADMIN_DEFAULT_PASSWORD: "admin"
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - 5050:80
    networks:
      - postgres
    restart: unless-stopped
  app:
    image: hseeberger/scala-sbt
    working_dir: /app
    command: sbt runAll
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: changeme
      POSTGRES_HOST: postgres
    volumes:
      - ./:/app
    networks:
      - postgres
    ports:
      - 9000:9000
networks:
  postgres:
    driver: bridge
volumes:
  postgres:
  pgadmin:
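Looking at this compose file, one thing worth checking is the JDBC URL: inside the Compose network the database is reachable by its service name (postgres), not by localhost. A minimal sketch of what the datasource config could look like, assuming Play's default db.default keys (the exact keys depend on your application.conf):

```hocon
# Hypothetical application.conf fragment: point the JDBC URL at the Compose
# service name, or at the POSTGRES_HOST env var the app service already sets.
db.default {
  driver   = "org.postgresql.Driver"
  url      = "jdbc:postgresql://postgres:5432/postgres"
  username = ${?POSTGRES_USER}
  password = ${?POSTGRES_PASSWORD}
}
```

If the URL still says localhost, the app container will try to connect to itself and the bootstrap-time JNDI binding Renato describes can fail.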
Arsene
@Tochemey
@renatocaval This is some of the log at startup:
 [info] play.api.db.HikariCPConnectionPool [] - datasource [default] bound to JNDI as DefaultDS
brabo-hi
@brabo-hi
can we serialize events and command replies using Jackson instead of play-json ?
1 reply
Arsene
@Tochemey
@brabo-hi please read the doc. That is the recommended way
mohammad-shafahad
@mohammad-shafahad
can anyone please point me to PDF books for building a Scala-based REST API?
Nikhil Arora
@nikhilaroratgo_gitlab
@renatocaval Hello Renato, once again I need some help. I added a dependency on javadsl-pubsub_2.12-1.5.4 in my project; I can inject PubSub into the classes I want and the app runs fine. But in my tests, where I initialise the app with ServiceTest.withServer (which initialises the module and classes), I get the error: No implementation for com.lightbend.lagom.javadsl.pubsub.PubSubRegistry was bound
while locating com.lightbend.lagom.javadsl.pubsub.PubSubRegistry. I then tried to mock PubSubRegistry, but then I need to do a lot more mocking, which brings me to the question: am I doing something wrong? Please help me out in this case
1 reply
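A common cause of this: Lagom's distributed pub-sub needs a cluster, and ServiceTest does not form one by default, so the PubSubRegistry binding never happens in tests. A sketch of what enabling the cluster in the test setup could look like (withCluster and withCassandra are in the javadsl testkit; verify the exact calls against your Lagom version, this is not a complete test):

```java
// Sketch only: enable clustering in Lagom's ServiceTest so that
// cluster-backed components such as PubSubRegistry get bound.
import com.lightbend.lagom.javadsl.testkit.ServiceTest;
import static com.lightbend.lagom.javadsl.testkit.ServiceTest.defaultSetup;

ServiceTest.Setup setup = defaultSetup()
    .withCluster(true)      // PubSub requires a (single-node) cluster
    .withCassandra(false);  // enable only what the test needs

// ServiceTest.withServer(setup, server -> { /* assertions here */ });
```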
David Leonhart
@leozilla
Can anybody tell me where I can find the roadmap for Lagom?
Per Wiklander
@PerWiklander

I'm migrating to Lagom 1.6 and want to take advantage of Akka Persistence Typed. I need to know if I can refactor one entity at a time or have to go all in. I'm finding conflicting documentation on this.

1: "Note: the only limitation when migrating from Lagom Persistence (classic) to Akka Persistence Typed is that a full cluster shutdown is required. Even though all durable data is compatible, Lagom Persistence (classic) and Akka Persistence Typed can't coexist."
https://www.lagomframework.com/documentation/1.6.x/scala/MigratingToAkkaPersistenceTyped.html#Migrating-to-Akka-Persistence-Typed

2: “Akka Persistence can coexist with existing persistent entities, and the same read-side processor and topic producer APIs fully support both types of entities”
https://www.lagomframework.com/documentation/1.6.x/scala/Highlights16.html

My (optimistic) guess here is that I should read the first quote as "A full cluster shutdown is needed, otherwise both old and new versions of the same entity would be running in the cluster during deployment".
Ignasi Marimon-Clos
@ignasi35
The second sentence is misleading and needs a rewrite. The situation is this: you can't have some nodes in the cluster reading a journal and using the events to rehydrate a Lagom Persistent Entity while other nodes in the cluster read the journal and use the events to rehydrate an Akka Persistence Typed entity. At the same time, you can't have a client sending a command using Lagom's Persistence API to a node where the entity is managed by Akka Persistence Typed. In that sense the two implementations can't coexist. Where coexistence does happen is between write-side and read-side. Meaning: even when all your write-side entities are implemented in Akka Persistence Typed, your read sides can still be Lagom read sides (read-side processors or topic producers). This coexistence, though, can only happen if you follow certain restrictions w.r.t. the names of the persistence IDs, tagging strategies, etc. (all the details are in the docs).
Per Wiklander
@PerWiklander
Please clarify if my last statement above is correct or not. Is this per application/cluster or per entity class in the cluster?
As you can see, this is apparently a complicated subject. The docs should say something along the lines of "Note: when migrating to Akka Typed, you have to migrate all your persistent entities to Akka Typed in one go before you can deploy the application", if that is the case.
Ignasi Marimon-Clos
@ignasi35
The restriction is per entity.
IIRC, you can have a single Lagom service using a Lagom Persistent Entity and an Akka Persistence Typed entity running side by side (even sharing the same journal table in the database).
Per Wiklander
@PerWiklander
@ignasi35 Thanks! That was what I thought it meant, but I think it's important to write the documentation as simply and clearly as you just did.
mosaic-ashalatha-s
@mosaic-ashalatha-s

Does anyone know how we can load an additional-configuration class in a Lagom Java project? Here the additional-configuration class (ProvidesParameterStoreConfiguration) is written in Scala using the com.lightbend.lagom.scaladsl.api.{AdditionalConfiguration, ProvidesAdditionalConfiguration} classes, and in a Scala Lagom project we can load it using the piece of code below.

class FooServiceLoader extends LagomApplicationLoader {

  // Production mode configuration
  override def load(context: LagomApplicationContext): LagomApplication =
    new FooApplication(context) with ProvidesParameterStoreConfiguration

  // Development mode configuration
  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new FooApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[FooService])
}

abstract class FooApplication(context: LagomApplicationContext) extends LagomApplication(context)

My question is: how can we load this additional-configuration class (ProvidesParameterStoreConfiguration) in a Java Lagom project? Thank you

Sergey Morgunov
@ihostage
@mosaic-ashalatha-s For Java you need to use Guice modules. Or maybe I don't understand the question.
mosaic-ashalatha-s
@mosaic-ashalatha-s

@ihostage Thanks for the response .
I tried the code below, but it didn't work.

public class FooModule extends AbstractModule implements ServiceGuiceSupport {

  private final Environment environment;
  private final Config config;

  public FooModule(Environment environment, Config config) {
    this.environment = environment;
    this.config = config;
  }

  @Override
  protected void configure() {
    if (environment.isProd()) {
      bind(ServiceLocator.class).to(ProvidesParameterStoreConfiguration.class);
    }
  }
}

and the ProvidesParameterStoreConfiguration class is below:

trait ProvidesParameterStoreConfiguration
  extends ParameterStoreConfigurationComponents
  with ProvidesAdditionalConfiguration {

  /** Appends the resolved parameter store configuration to any existing additional configuration. */
  override def additionalConfiguration: AdditionalConfiguration =
    super.additionalConfiguration ++ parameterStoreConfiguration.underlying
}
Sergey Morgunov
@ihostage
Hmmm... I don't understand what you are trying to do. For the production environment you need to use Akka Discovery. https://www.lagomframework.com/documentation/current/java/AkkaDiscoveryIntegration.html#Using-Akka-Discovery
Moreover, ServiceLocator will be removed in the future in favor of Akka Discovery lagom/lagom#1927
mosaic-ashalatha-s
@mosaic-ashalatha-s
ok.. This ProvidesParameterStoreConfiguration class will load the AWS Parameter Store configurations in production mode...
Sergey Morgunov
@ihostage
Can you share an example of parameters? Akka Discovery has a module integration with AWS https://doc.akka.io/docs/akka-management/current/discovery/aws.html
mosaic-ashalatha-s
@mosaic-ashalatha-s
ok.. We are going to store these parameters in AWS:
db.default.username
db.default.password
db.default.url
lagom.broker.kafka.brokers etc..
Sergey Morgunov
@ihostage
Can you mount AWS parameters as a file and just include it in production configuration?
mosaic-ashalatha-s
@mosaic-ashalatha-s
ok.. As of now we are storing these parameters in the production config file, but we want to store them in AWS
Sergey Morgunov
@ihostage
It's not a problem if you can mount it as a file. We use the same approach in K8S.
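The mounted-file approach @ihostage describes can be sketched as a HOCON include; the file path and name here are hypothetical and depend on how the volume is mounted (e.g. from a K8s Secret or an AWS-synced file):

```hocon
# Hypothetical production.conf: base config plus secrets from a mounted file.
include "application.conf"

# The mounted file would contain keys like db.default.username,
# db.default.password, db.default.url, lagom.broker.kafka.brokers, ...
include file("/etc/secrets/service.conf")
```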
mosaic-ashalatha-s
@mosaic-ashalatha-s
ok..Thank you:)
Loup
@Ephasme
Hi, I find it difficult to start building an application with Lagom, since there is no example of a "real" application and the documentation is not extensive on how to go past the very small scope of Hello World. Do you have any advice on how to go beyond one entity in the domain model to many, with authentication, filters, etc.?
Nikhil Arora
@nikhilaroratgo_gitlab
@Ephasme This has some sample applications https://github.com/lagom/lagom-samples
Loup
@Ephasme
@nikhilaroratgo_gitlab thanks a lot, I knew that repository, but I was looking for something actually bigger, with complex domain models, business rule validations, etc. I'm working on an application that will manage personal finances and I'm a bit lost as to how I'm supposed to build it.
Nikhil Arora
@nikhilaroratgo_gitlab
Is it possible to do event deletion in Lagom persistent entities? Similar to what is mentioned here: https://doc.akka.io/docs/akka/current/typed/persistence-snapshot.html#event-deletion
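For reference, in Akka Persistence Typed (which the linked page documents) event deletion is tied to snapshot retention. A sketch in Java, assuming an EventSourcedBehavior; as far as I know, Lagom's classic PersistentEntity does not expose this API directly:

```java
// Sketch: retention with event deletion, per the linked Akka docs.
// Override this inside an akka.persistence.typed.javadsl.EventSourcedBehavior.
@Override
public RetentionCriteria retentionCriteria() {
  // Snapshot every 100 events, keep 2 snapshots, delete events older
  // than the oldest retained snapshot.
  return RetentionCriteria.snapshotEvery(100, 2).withDeleteEventsOnSnapshot();
}
```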
Ignasi Marimon-Clos
@ignasi35
Hi @Ephasme! We're no longer maintaining it, but https://github.com/lagom/online-auction-scala/ is a rather feature-rich sample application showcasing multiple features of Lagom by implementing an eBay clone. Hope that helps a bit.
Loup
@Ephasme
@ignasi35 thanks! It's such a shame that you no longer maintain it... it's such a valuable source of information, and it might help Lagom adoption for people who would like to jump in but are intimidated by the lack of feature-rich examples.
6 replies
One more question: how do you test a read-side processor? I have a read-side that is supposed to write entries to Cassandra, and I would like to write a test that sends an event to the read-side and then checks in Cassandra that the row has indeed been written.
4 replies
lapidus79
@lapidus79
In Lagom, HikariCP keeps logging HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection@389b87ec (This connection has been closed.). Possibly consider using a shorter maxLifetime value. It really doesn't seem to matter what we change the maxLifetime value to.
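For what it's worth, that Hikari warning usually means something between the app and Postgres (the database itself, a proxy, or Docker networking) is closing idle connections before Hikari retires them, so maxLifetime needs to be below that external timeout. A hypothetical fragment, assuming Play's per-database HikariCP keys (check your Play/Lagom version's config reference):

```hocon
# Hypothetical fragment: keep Hikari's lifetimes below whatever closes
# idle connections on the server or network side.
db.default.hikaricp {
  maxLifetime = 5 minutes
  idleTimeout = 4 minutes   # should be shorter than maxLifetime
}
```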
André Schmidt
@scalamat
Anyone here able to help me with my runAll problem? I'm using Windows and want to start a run. On runAll I always get the error: 10:17:02.803 [error] akka.io.TcpListener [akkaAddress=akka://security-impl-internal-dev-mode, sourceThread=security-impl-internal-dev-mode-akka.actor.internal-dispatcher-3, akkaSource=akka://security-impl-internal-dev-mode/system/IO-TCP/selectors/$a/0, sourceActorSystem=security-impl-internal-dev-mode, akkaTimestamp=08:17:02.793UTC] - Bind failed for TCP channel on endpoint [/127.0.0.1:50491] java.net.BindException: [/127.0.0.1:50491] Address already in use: bind
5 replies
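That bind error means some process already holds 127.0.0.1:50491, often a previous runAll that didn't shut down cleanly. A small self-contained check (unrelated to Lagom itself) to see whether a port is taken; the port number is just the one from the log above:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortCheck {
    // Returns true if we can bind the port on localhost, i.e. it is free.
    static boolean portFree(int port) {
        try (ServerSocket s = new ServerSocket(port, 1,
                InetAddress.getByName("127.0.0.1"))) {
            return true;
        } catch (IOException e) {
            return false; // already in use (or binding not permitted)
        }
    }

    public static void main(String[] args) {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 50491;
        System.out.println("port " + port
                + (portFree(port) ? " is free" : " is in use"));
    }
}
```

On Windows, `netstat -ano | findstr 50491` then Task Manager (or `taskkill /PID <pid>`) will show and stop the process holding the port.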
André Schmidt
@scalamat
Hello again. Do you have any tutorial for a single-jar deployment? I mean, all microservices in one jar or one Docker image? And is there any tutorial on building all microservices and linking them manually?
Ignasi Marimon-Clos
@ignasi35
Hi @scalamat, there's no such tutorial because that's not supported. Running all services in a single JVM requires some classloader separation, since each service is expected to run in isolation. But packaging them together defeats the purpose of microservices anyway.
André Schmidt
@scalamat
Hi @ignasi35, thank you. So it is important to run each microservice in its own JVM, correct? But I can run all microservices on a single physical machine. In that case, is it necessary to link each microservice manually, or will Akka find them by itself, like on multiple machines? I know that running everything in one JVM is more or less equal to a monolith. But for the first version of my project it is better to set up all microservices on a single machine and then move them out later.
mosaic-prateek-singh
@mosaic-prateek-singh
Hi All,
I am getting this error in my Lagom Java project whenever I start it or deploy it into the Kubernetes cluster. Please help me understand the reason:
1UTC] Restarting graph due to failure. stack_trace:
lien-service-5b55c8698-lspqp lien-service java.util.concurrent.TimeoutException: The first element has not yet passed through in 5000 milliseconds.
lien-service-5b55c8698-lspqp lien-service at akka.stream.impl.Timers$Initial$$anon$1.onTimer(Timers.scala:62)
lien-service-5b55c8698-lspqp lien-service at akka.stream.stage.TimerGraphStageLogic.onInternalTimer(GraphStage.scala:1601)
lien-service-5b55c8698-lspqp lien-service at akka.stream.stage.TimerGraphStageLogic.$anonfun$getTimerAsyncCallback$1(GraphStage.scala:1590)
lien-service-5b55c8698-lspqp lien-service at
Thanks