[Romina Suarez, Ushahidi] I'm trying to think if I can come up with a solution, but for private deployments we generally don't do embeds, since they require login to see anything at all.
I think the ideal solution here would be being able to log in on your site and have that reflected in ushahidi.io with a temporary token, but we're not set up for that at this time (SSO could also help with this). I'm asking my colleagues for ideas as well to see if we can figure something out.
[Romina Suarez, Ushahidi] Re SSO: we don't have any SSO solutions yet.
That's not possible right now. What we've done in the past, when we needed a quick solution for showing the contents of a deployment in another site (we did this for an aggregate of deployments), was to log the user in through the authorization endpoint, then use that token to get the data and recreate the UI component we need from the API data (rather than just embedding) for a better UX. The downside, of course, is that this is a lot more work.
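A minimal sketch of that "fetch via the API and rebuild the UI" approach: once you have a token, request the posts with it and flatten the response into whatever your own component needs. The endpoint path (`/api/v3/posts`), the `results` key, and the field names are assumptions about the Platform API's response shape, not confirmed details.

```python
import json
from urllib import request


def fetch_posts(deployment_url, token):
    """GET the deployment's posts with a bearer token (assumed /api/v3/posts path)."""
    req = request.Request(
        deployment_url.rstrip("/") + "/api/v3/posts",
        headers={"Authorization": "Bearer " + token},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def to_view_model(api_response):
    """Flatten the assumed response shape into just what a custom UI component renders."""
    return [
        {"title": p.get("title"), "status": p.get("status")}
        for p in api_response.get("results", [])
    ]
```

The point of `to_view_model` is that, unlike an iframe embed, you control exactly which fields reach the page, which is what makes the custom-UI route more work but a better UX.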
A suggestion, if you want some things to be public and others private (not sure if that's the case) and if it's feasible: use post statuses to keep posts private instead of the private deployment setting, and avoid the login prompt.
• published posts are visible to everyone
• draft posts are private: only users with MANAGE POSTS permissions or higher can see them
It really does depend on your use case.
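The visibility rule described in those two bullets can be sketched as a small filter. This is an illustration of the stated behavior, not the platform's actual implementation; the function and field names are hypothetical.

```python
def visible_posts(posts, can_manage_posts):
    """Return the posts a viewer may see under the rule above:
    published posts for everyone, drafts only for users who hold
    MANAGE POSTS permissions (or higher)."""
    return [
        p for p in posts
        if p["status"] == "published"
        or (can_manage_posts and p["status"] == "draft")
    ]
```

With this scheme the deployment itself stays public, so anonymous visitors get the published subset without ever hitting a login prompt.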
[Romina Suarez, Ushahidi] Totally
*anon token (for non logged in users)*
*Getting a token for a specific user:*
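A sketch of the two token requests labeled above, assuming standard OAuth2 grants (`client_credentials` for the anonymous token, `password` for a specific user). The client id and secret are placeholders you'd replace with your deployment's own credentials; the exact grant types and field names are assumptions, not confirmed for this API.

```python
def anon_token_payload(client_id, client_secret):
    """Token request body for non-logged-in users (assumed client_credentials grant)."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,          # placeholder credential
        "client_secret": client_secret,  # placeholder credential
    }


def user_token_payload(client_id, client_secret, username, password):
    """Token request body for a specific user (assumed password grant)."""
    return {
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password,
    }
```

Either payload would be POSTed to the deployment's token endpoint, and the returned access token then sent as a `Bearer` header on API requests.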
localhost:3000/verifier
Do you get any hints there about what could be wrong?
gulp dev:verifier
and on navigating to the site I receive the same response as before
Hi Angela. We are using Ushahidi to map the status of infrastructure like taps, toilets and lights in informal settlements in Cape Town, with the idea of adding layers such as the mapping of violent incidents (crime, fires, rape, etc.) so that the municipality has a set of dynamic data to work with when planning maintenance and upgrades in these areas.
We have set up some surveys that take in information on each item of infrastructure: its location, its status (broken/working), some metadata around its location like landmarks and area name, dates, etc. We prepare all this data as a CSV, which we then upload to each survey for each item.

When I upload CSVs with more than 75 or so records, the upload shows an error with no message. When I then check the map, the points do show, but the counts on the survey will be double, triple, or even 4 or 5 times the number of records in the uploaded CSV. When I check collections, the CSV shows up once for each time it was duplicated. And when I delete the records from those CSVs, so that only one copy of the duplicated CSV remains, the data on the map doesn't seem to change, but the numbers on the survey panel on the left do change; the CSVs under collections never disappear, though the records within the deleted ones are gone. This issue is making it difficult to tell whether the data coming through on the map is accurate, as the duplications are sometimes not perfect multiples of the number of records in the uploaded CSV.