@nickw1_gitlab https://hikar.org/webapp does not work on my iPhone SE (iOS 14.4.1): I can see all the labels, which seem to be placed in the correct direction, but the camera freezes. The same happens with the red-box example in GitHub issue AR-js-org/AR.js#216. The iOS Safari console outputs this error:
Unhandled Promise Rejection: NotAllowedError: The request is not allowed by the user agent or the platform in the current context, possibly because the user denied permission. (aframe-ar-nft.js:210535)
When I replace `videoTexture: true;` with `embedded`, the camera works but clicks are not registered; with both `videoTexture: true;` and `embedded`, the screen is totally black.
Any idea what the problem could be? Thanks in advance 🙏🏽
Hi everyone, I will just post this here in case someone has a similar issue or wants to make a pull request. I needed to display far objects (>1 km), as in the Hikar example. I was fighting with `videoTexture: true` because the camera image gets squashed and does not maintain its aspect ratio. I wanted to keep the aspect ratio while making the camera image cover the whole area. I came up with this solution:
```js
// Main <a-scene> element - you need to wait for loading via the `loaded` event
var scene = document.getElementById('scene');

// This is the extra component created by AR.js which puts the camera feed
// into the scene without making far objects invisible
var webcam = document.querySelector('[arjs-webcam-texture]');
var component = webcam.components['arjs-webcam-texture'];

// You also need to wait for the video to start playing (`loadeddata` event
// or similar), otherwise `videoWidth` and `videoHeight` are zero
var viewportRatio = scene.clientWidth / scene.clientHeight;
var videoRatio = component.video.videoWidth / component.video.videoHeight;
var w = Math.max(1, videoRatio / viewportRatio);
var h = Math.max(1, viewportRatio / videoRatio);
component.geom.attributes.position.array = new Float32Array([
  -w / 2,  h / 2, 0,
   w / 2,  h / 2, 0,
  -w / 2, -h / 2, 0,
   w / 2, -h / 2, 0
]);
component.geom.attributes.position.needsUpdate = true;

// You should also run this code again whenever the viewport size changes
// (the window's `resize` event)
```
I am not sure how this code affects the placement precision of 3D content in the image (you should probably adjust the FOV, but I have not got that far yet; outdoor testing is required).
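The cover-fit math above can be factored into a small pure helper (the function name is mine, not part of AR.js), which makes it easy to re-run from a `resize` handler:

```javascript
// Pure helper with the same cover-fit math as above: whichever ratio is
// larger pushes the plane size past 1 on that axis, so the video covers
// the viewport without being squashed.
function coverPlaneSize(viewportW, viewportH, videoW, videoH) {
  var viewportRatio = viewportW / viewportH;
  var videoRatio = videoW / videoH;
  return {
    w: Math.max(1, videoRatio / viewportRatio),
    h: Math.max(1, viewportRatio / videoRatio)
  };
}

// Example: a landscape 1280x720 video in a portrait 1080x1920 viewport
// widens the plane (w > 1) while the height stays at 1.
```

Calling this from both the video's `loadeddata` handler and the window's `resize` handler keeps the plane in sync with the viewport.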
And one small note: you should account for magnetic declination when displaying location-based augmented content, because the device's compass gives you the angle to magnetic north, which is not true north. You can simply calculate the declination with a library (such as magvar) and rotate the whole scene a bit, like this:
```html
<a-scene id="scene">
  <!-- mag. declination 5 degs. -->
  <a-entity rotation="0 5 0" id="parent">
  </a-entity>
</a-scene>
```
You obviously need to set the rotation attribute dynamically, as the declination differs between locations. Then put all your location-based content into the #parent element.
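A sketch of that dynamic wiring. `getDeclination(lat, lon)` is a hypothetical stand-in for whatever your declination library (e.g. magvar) actually exposes; check its API before using this:

```javascript
// Build the A-Frame rotation attribute string (rotate about the Y axis
// by the declination, in degrees).
function rotationForDeclination(declinationDeg) {
  return '0 ' + declinationDeg + ' 0';
}

// Browser wiring (sketch): look up the device position, compute the local
// declination, and rotate the #parent entity accordingly.
// `getDeclination` is a hypothetical callback, not a real library API.
function applyDeclination(getDeclination) {
  navigator.geolocation.getCurrentPosition(function (pos) {
    var decl = getDeclination(pos.coords.latitude, pos.coords.longitude);
    document.getElementById('parent')
      .setAttribute('rotation', rotationForDeclination(decl));
  });
}
```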
`true` when creating the `video` element. If you look at the PR you should be able to figure out the code change you need to make; let me know if not.
`<a-text>` and set the `gps-projected-entity-place` component to the appropriate latitude and longitude. If you look at the 'connecting to a web API' example at https://ar-js-org.github.io/AR.js-Docs/location-based-tutorial/ this gives an example of what to do.
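A minimal sketch of that approach, assuming the standard `latitude: ...; longitude: ...` attribute format used by AR.js; the text value, scale, and coordinates below are placeholders of mine, not values from this thread:

```javascript
// Build the gps-projected-entity-place attribute value from coordinates.
function gpsPlaceAttr(lat, lon) {
  return 'latitude: ' + lat + '; longitude: ' + lon + ';';
}

// Browser wiring (sketch): create an <a-text> at the given position
// and add it to the scene.
function addGpsText(scene, value, lat, lon) {
  var text = document.createElement('a-text');
  text.setAttribute('value', value);
  text.setAttribute('scale', '20 20 20');
  text.setAttribute('gps-projected-entity-place', gpsPlaceAttr(lat, lon));
  scene.appendChild(text);
  return text;
}
```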
Hello Everyone -
I am looking to create a virtual gallery. I want to hang pictures and place objects on surfaces (walls, tables, pavement outside) and also make it location-aware. How can I achieve this? I thought using three.js and AR.js would get me somewhere, but I am not sure how to proceed, mainly with surface detection using AR.js. Is it possible to do this with the libraries below?
(AR.js with Image Tracking + Location Based AR)
Thanks in advance!
I'm a total newbie, but I'll try to help you out. I had very similar issues.
Plus I have a question.
I downloaded the location code from here and uploaded it to my host.
The cam worked and the asset loaded.
Then I turned on GPS... and then, and then...
I only saw one white pixel at my feet.
I selected settings (top right) and flipped everything on in developer mode.
Why not, eh?
Then I modified the code below.
My notes are in slashes. /I'd remove the notes and the slashes, totally up to you/
```html
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<body style="margin: 0; overflow: hidden;">
  <!-- /debugUIEnabled: from false to true/ -->
  <a-scene
    embedded
    loading-screen="enabled: false;"
    arjs="sourceType: webcam; debugUIEnabled: true;"
  >
    <video
      src="assets/asset.mp4"
      preload="auto"
      id="vid"
      response-type="arraybuffer"
      loop
      crossorigin
      webkit-playsinline
      autoplay
      muted
      playsinline
    ></video>
    <!-- /changed scale="1 1 1" to scale="12 12 12"/ -->
    <!-- /entered my personal GPS lat/long info/ -->
    <a-video
      src="#vid"
      position="0 0.1 0"
      rotation="-90 0 0"
      look-at="[gps-camera]"
      videohandler
      smooth="true"
      smoothCount="10"
      smoothTolerance="0.01"
      smoothThreshold="5"
      autoplay="true"
      scale="12 12 12"
      gps-entity-place="latitude: 21.290583269952876; longitude: -157.82944697275167;"
    ></a-video>
    <a-camera gps-camera rotation-reader></a-camera>
  </a-scene>
</body>
```
Then everything worked like a charm.
I'll need to adjust positions.
LOVE the improvement.
How can I incorporate this GPS and directional data into a three.js scene?