dhelmr
@dhelmr
Also, we wonder if we are using project versions in the intended way. Currently we create a new project version for each released version of the software, i.e. we want a separate analysis for each state. But this results in a long list of project uuids in Dependency-Track (>30 for some projects). Also, the BOMs will rarely change between these versions, so we have many versions that share the same BOM but are represented as different project uuids in Dependency-Track. Is that good practice, or should we expect performance issues in the future if we keep doing it? Is there a way to "group" versions that have the same BOM?
Steve Springett
@stevespringett

@dhelmr When BOMs are uploaded, they are processed and discarded; they are not saved. There is a REST endpoint that lets you export a project in CycloneDX format. This endpoint dynamically generates the BOM based on what’s in the current project.
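As a hypothetical sketch of that export, something like the following should work; the `/api/v1/bom/cycloneDx/project/{uuid}` path and the `X-Api-Key` header are assumptions based on the Dependency-Track REST API, so verify them against the API docs for your version:

```shell
# Assumed endpoint and auth header - check your server's REST API docs.
# DT_BASE_URL, DT_API_KEY, and PROJECT_UUID are placeholders you supply.
curl -s -H "X-Api-Key: $DT_API_KEY" \
  "$DT_BASE_URL/api/v1/bom/cycloneDx/project/$PROJECT_UUID" \
  -o project-bom.xml
```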

You’re using project versions correctly. If there’s a need to track individual releases of the same software, then the approach you’re taking is valid. If there is no need to track old versions, then you could either delete the old projects, or in v3.6 you’ll be able to set projects as active or inactive. DependencyTrack/dependency-track#399

Eventually, when the UI gets a makeover, there will be a way in the UI to easily find a project and drill into all children thus removing the need to display all projects at once in a big table. DependencyTrack/dependency-track#84

Terror
@mterron
Hi, is there any way to upload a bom and block till it's processed using the API?
Steve Springett
@stevespringett
@mterron when a BOM is uploaded, a token (uuid) is returned. You can then query /bom/token/{uuid} to see if the token is being processed. The value of processing will be true if it’s still being worked on. The value will be false if processing is complete or the token is invalid. Once you get a false return value, you can then query the findings API (or vulnerability API) to retrieve the current results. This is essentially what the Jenkins plugin does when ‘synchronous processing mode’ is enabled.
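A minimal sketch of that polling loop, assuming the `/api/v1/bom/token/{uuid}` path and `X-Api-Key` header from the Dependency-Track REST API (verify both for your version); the HTTP call is injectable so the loop itself can be exercised without a server:

```python
import json
import time
import urllib.request


def wait_for_bom_processing(base_url, api_key, token,
                            poll_seconds=5, timeout=300, fetch=None):
    """Poll the BOM token endpoint until 'processing' becomes false.

    Returns True once processing is finished (or the token is unknown,
    which the API reports the same way), False if `timeout` elapses.
    `fetch` can be injected for testing; the default does a real GET.
    """
    def default_fetch(url):
        req = urllib.request.Request(url, headers={"X-Api-Key": api_key})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    fetch = fetch or default_fetch
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch(f"{base_url}/api/v1/bom/token/{token}")
        # false means done (or invalid token) - safe to query findings next
        if not status.get("processing"):
            return True
        time.sleep(poll_seconds)
    return False
```

After this returns True, the findings API can be queried for the current results, as described above.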
Terror
@mterron
Thanks @stevespringett
Fadi Serhan
@fserhanwh_twitter

Hi guys, has anyone managed to enable SSL/TLS at the application level? Similar to the Jenkins way of doing it, without needing to deploy Apache or Nginx in front. I am using the embedded WAR and figured out it’s built on Jetty, but the application does not seem to provide any arguments or support for handling SSL/TLS.

Something like this Jenkins-style invocation will not work: owasp@ip:~$ java -Xmx8G --httpPort=-1 --httpsPort=8443 --httpsKeyStore=/usr/local/share/ca-certificates/sca.jks --httpsKeyStorePassword=xxxxxxx -jar dependency-track-embedded.war

Any ideas? Or do I need to deploy Apache/Nginx to enable HTTPS? The application is deployed on an AWS EC2 instance, thanks for the help.

Steve Springett
@stevespringett
Most folks use nginx in front of dependency-track when they’re using the docker container or executable war.
Fadi Serhan
@fserhanwh_twitter
Got it, I assumed that. Thanks @stevespringett for clarification.
Terror
@mterron
Hi, has anyone run into a situation where the dependencytrack container does not respect the ALPINE_DATA_DIRECTORY env variable? I see my container trying to write to ?/.dependency-track/dependency-track.log even though ALPINE_DATA_DIRECTORY=/data is set.
Terror
@mterron
The example on the DependencyTrack website fails the same way: docker run -it --rm --user=12345 -v dependency-track:/data owasp/dependency-track. It seems to be related to the user it is running under.
Steve Springett
@stevespringett
@mterron the logging framework is independent from the ALPINE_DATA_DIRECTORY. Refer to a similar question here: DependencyTrack/dependency-track#387
Under most situations, you should not need to modify the data directory for a container. Simply specify where you want the volume to reside, but keep the mount point inside the container at /data.
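For example, a minimal run following that advice might look like this; the image name and port mapping follow the project's published examples, but verify them against the current deployment docs:

```shell
# Create a named volume; Docker decides where it lives on the host.
docker volume create dependency-track
# Keep the in-container mount point at /data (the default data directory).
docker run -d -p 8080:8080 -v dependency-track:/data owasp/dependency-track
```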
Terror
@mterron
I made some progress, chmod -R 777 /opt fixes the initial error. I've sent a PR with some fixes to the Dockerfile @stevespringett
Terror
@mterron
Any way to set the CA to be used to verify ldaps connections?
Steve Springett
@stevespringett
Root certificates can be added to the Java Trust Store. However, with Docker, that means you’ll need to create your own container, as Java doesn’t provide easy ways to do this. With Alpine, if you use an ldap:// URL with a secure port, then you’ll need the certificate in the Java Trust Store, as it will fail validation. If, however, you use an ldaps:// URL, then Alpine will use a relaxed SSL/TLS policy and will not perform validation. If you want to perform specific validations, that requires modification to the Java Trust Store and you’ll need to roll your own Docker image.
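A hypothetical sketch of the trust-store step for a rolled-your-own image; the certificate path and alias are placeholders, and the cacerts location and the default "changeit" password vary by JRE and base image, so verify both:

```shell
# Import a private root CA into the JVM trust store used by the app.
# Alias, cert path, keystore path, and password are all assumptions here.
keytool -importcert -noprompt -trustcacerts \
  -alias my-ldap-ca \
  -file /path/to/my-ldap-ca.crt \
  -keystore "$JAVA_HOME/lib/security/cacerts" \
  -storepass changeit
```

Baking this into your own Dockerfile (run as a RUN step on top of the base image) is what "roll your own Docker image" amounts to here.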
Terror
@mterron
I'll give ldaps:// a spin. Thank you @stevespringett !
In that second case, does it skip cert validation entirely, or does it just use the system CAs?
Robb Hill
@vortextube
Couple of questions I couldn't find in the docs. 1) I see the download of vulnerabilities etc. when I start the service. Should that happen on first start or on every start? Also, once started, does it update periodically? 2) Given the system is aware of a component (e.g. test.component version 1.0) from a scan on 1/1/2019, then without an additional upload of my BOM, a critical vulnerability comes out for that component on 1/2; will Dependency-Track figure that out and add a new issue?
Terror
@mterron
1) It depends on whether you have persistent storage or not, I guess. It is updated periodically. 2) Yes
Robb Hill
@vortextube
On #1, is there any way to see the frequency of updates or know when it does update? I can't see any settings or events in the application that indicate activity.
On #2, are there any details anywhere, or app code I can take a look at, that would help me understand the application's expected behavior?
Terror
@mterron
@vortextube Yes to both. DT is hosted on GitHub, and there's also a full-blown documentation website.
Jon Brohauge
@jonbrohauge
Anybody have an example of how to implement pagination via the component API?
Steve Springett
@stevespringett
@jonbrohauge you can use page and size as query strings, or offset and limit.
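A hypothetical sketch of paging through the component API that way; the endpoint path and the pageNumber/pageSize spelling are assumptions (the message above says "page/size", so check the exact parameter names against your server version), and the HTTP call is injected so the paging logic runs without a server:

```python
def iter_components(fetch, project_uuid, page_size=100):
    """Yield all components for a project, one page at a time.

    `fetch(url)` must return the decoded JSON list for one page; it is
    injected so this can be tested (or backed by any HTTP client).
    Endpoint path and parameter names are assumptions - verify them.
    """
    page = 1
    while True:
        url = (f"/api/v1/component/project/{project_uuid}"
               f"?pageNumber={page}&pageSize={page_size}")
        batch = fetch(url)
        if not batch:
            return
        yield from batch
        if len(batch) < page_size:  # short page: nothing left to fetch
            return
        page += 1
```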
Terror
@mterron
Can I define more than 1 server on LDAP_URL to use for failover purposes?
Steve Springett
@stevespringett
@mterron No, that’s currently not possible and will likely not be a roadmap item. The next auth enhancements will be around OAuth with OIDC.
Jon Brohauge
@jonbrohauge
Couldn't find a maven plugin for Dependency-Track, so we made our own and open sourced it. https://github.com/topdanmark/dependency-track-maven-plugin
Steve Springett
@stevespringett

@jonbrohauge A community developed Maven plugin already exists. A link to the repo is on the OWASP Dependency-Track wiki. https://www.owasp.org/index.php/OWASP_Dependency_Track_Project

https://github.com/pmckeown/dependency-track-maven-plugin

What does the version from Topdanmark do that is unique/different from the existing one?

Jon Brohauge
@jonbrohauge
@stevespringett Thanks for the heads up, I was not aware of the plugin. Must have missed it somehow when searching for a plugin. Will definitely take a closer look to see if it serves our needs.
Jon Brohauge
@jonbrohauge
If the commit dates are correct, it looks like we started our plugins at the same time, more or less.
Robb Hill
@vortextube

I see some Maven issues related to xml-apis; has anyone seen these with the Gradle plugin?

CycloneDX: Creating BOM
:cyclonedxBom (Thread[Task worker for ':',5,main]) completed. Took 0.602 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':cyclonedxBom'.
> org/w3c/dom/ElementTraversal

Anyone else run into this issue?
Barath Subramaniam
@baraths84
@stevespringett - Need some input to understand VulnDB through Risk Based Security: "Dependency-Track can leverage VulnDB by incorporating the entire contents of the VulnDB service. In doing so, VulnDB data becomes a first-class citizen in Dependency-Track working alongside other sources of data to identify risk."
Does it mean it erases the current data feed (the UI "Vulnerabilities" page shows a count of 130495) and brings in a completely new set from VulnDB?
Steve Springett
@stevespringett
Enabling VulnDB will import over 200K new vulnerabilities. All existing vulns from NVD, NPM, etc. remain unchanged. VulnDB would be really good for asset vulnerabilities (applications, operating systems, servers, etc.). I don’t think it will be overly valuable for libraries at this point, since VulnDB does not support PackageURL.
I think they have over 60K vulns that are not listed in the NVD
Also, if you’re only showing 130495 vulns, that is incomplete. I can’t remember exactly, but there are over 160K in the NVD. You can force a reimport by wiping out the ~/.dependency-track/nist directory. The next time the server starts (or the next time it syncs, which is every 24 hours) it will grab all vulns from the NVD, giving you the complete list.
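The force-reimport step above, as a one-liner (path per the message; stopping the server first avoids racing the sync):

```shell
# Remove the cached NVD feed data; it is re-downloaded on the next
# server start or on the next 24-hour sync.
rm -rf ~/.dependency-track/nist
```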
Barath Subramaniam
@baraths84
Thank you @stevespringett for your input on VulnDB; I will wipe the directory to fix the count. Are all feeds extracted from this location and imported into the platform: https://nvd.nist.gov/vuln/data-feeds (JSON feeds)? I am thinking of parsing the count to verify that the count in the deployed Dependency-Track platform is always accurate (possibly by developing a utility). My Dependency-Track has been up and running for the last few weeks; I am not sure why there is such a big count difference, or if it's a one-off interim sync issue.
Barath Subramaniam
@baraths84
@stevespringett - I'd like to confirm the vulnerability count with you. Even after a forced reimport, I still see the count at 130797. Can you suggest any other steps, in case there is anything I am missing to get this fixed? Dependency-Check is disabled; does that have any impact on this count?
Steve Springett
@stevespringett
My fault. That number is correct. I’ve been working with VulnDB for so long that I got used to having a much larger count. 130k is correct
Barath Subramaniam
@baraths84
@stevespringett np thanks for confirming !
Patrick Dwyer
@patros
Anyone here with LDAP experience? I have it all working except mapped LDAP groups. I can see a list of groups, and can add a group as a mapped group, but I don't get any of the permissions of that team when I log in. I'm using Okta (doco https://help.okta.com/en/prod/Content/Topics/Directory/LDAP_Using_the_LDAP_Interface.htm). And my configured settings (some values replaced with "*")...
{
"name": "ALPINE_LDAP_ATTRIBUTE_MAIL",
"value": "email",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_ATTRIBUTE_NAME",
"value": "uid",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_AUTH_USERNAME_FORMAT",
"value": "%s",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_BASEDN",
"value": "dc=*,dc=okta-emea,dc=com",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_BIND_PASSWORD",
"value": "=*",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_BIND_USERNAME",
"value": "uid==*,dc==*,dc=okta-emea,dc=com",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_ENABLED",
"value": "true",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_GROUPS_FILTER",
"value": "(&(objectClass=groupOfUniqueNames))",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_GROUPS_SEARCH_FILTER",
"value": "(&(objectClass=groupOfUniqueNames)(cn={SEARCH_TERM}))",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_SECURITY_AUTH",
"value": "simple",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_SERVER_URL",
"value": "ldaps://=*.ldap.okta-emea.com:636",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_USER_GROUPS_FILTER",
"value": "(&(objectClass=groupOfUniqueNames)(uniqueMember={USER_DN}))",
"slotSetting": false
},
{
"name": "ALPINE_LDAP_USERS_SEARCH_FILTER",
"value": "(&(objectClass=inetOrgPerson)(cn={SEARCH_TERM}))",
"slotSetting": false
},
Steve Springett
@stevespringett
@patros Okta LDAP may or may not work. It was not tested, nor do I have a way to test it. You might want to set the logging level to DEBUG to get more information. Also, the Slack channel has a lot more members if you want to reach out there as well.
Patrick Dwyer
@patros
Thanks @stevespringett. I'll repost in Slack after getting a bit more logging output. Meanwhile, that Slack invite link no longer works.
Terror
@mterron
@stevespringett can you open invites to Slack? I can't join
Steve Springett
@stevespringett
It seems the OWASP invites have expired - they do occasionally. I’m waiting for an OWASP Slack admin to regenerate a new invite code. Other projects have recently complained about the issue as well.
Steve Springett
@stevespringett
OWASP admins have regenerated a new invite code. https://dependencytrack.org/slack/invite should now work again
Barath Subramaniam
@baraths84
@stevespringett /All - reported a bug under Sonatype for a finding I observed via Dependency-Track - OSSIndex/vulns#35 - thought of sharing it with you for any input/suggestions. Thank you!