--no-sandbox
too. Without it, it wasn't working since I'm using Docker. I'm not sure if you will need it. Also, for some reason it only worked after I changed the window size. My test used to fail with some window sizes and work with others... (I'm not at my desk currently, I can send you more info once I'm there...)
So @Priyaa15, I'm using the code below for headless Chrome.
capabilities: {
  browserName: 'chrome',
  chromeOptions: {
    //args: ['--no-sandbox', '--headless']
    args: ["--no-sandbox", "--headless", "--disable-gpu", "--window-size=1920x958"] // working
    //args: ["--no-sandbox", "--headless", "--disable-gpu", "--window-size=1280x648"]
    //args: ["--no-sandbox", "--headless", "--disable-gpu", "--window-size=1920x1080"] // working
  }
},
As you can see, I'm trying different window sizes to see which one will work. So far it worked with 1920x1080 and 1920x958, but failed with 1280x648. The tests work perfectly on my local laptop, but when I run them from Jenkins, for some reason they sometimes pass and sometimes fail. I'm still trying to figure out the reason...
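One thing you could try instead of relying only on the --window-size flag: force the window size in onPrepare in your conf.js. This is just a sketch, assuming Protractor with selenium-webdriver 3.x (where window().setSize(width, height) exists); the 1920x1080 value is only an example.

```javascript
// conf.js sketch (assumption: Protractor + selenium-webdriver 3.x API).
// Setting the size explicitly in onPrepare can make headless runs less
// dependent on whatever default size the Chrome flags produce.
exports.config = {
  // ...your existing framework/specs/capabilities options...
  onPrepare: () => {
    // Returning the promise makes Protractor wait for the resize
    // before any spec runs.
    return browser.driver.manage().window().setSize(1920, 1080);
  },
};
```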
Hello guys, I was making a little Protractor demo and the dev lead suggested updating the application HTML to include fixed 'attribute:value' pairs on all fields, so my tests remain valid in case anything changes on classes, button names, etc...
So based on the above, in my tests I will be locating all elements in the application via the CSS locator (so all my tests will be written using by.css), in this way: element(by.css("tagname[attribute='value']"))
The reason for this suggested change is to make our automated tests stable and unaffected by code changes... so I'm wondering if this is an acceptable practice? (I know it will make my life much easier as an automation tester, but at the same time I find it weird, and I'm wondering if there are any disadvantages I'm not yet aware of?) - let me know what you think
((I'm thinking that such a change could put the app's security at risk too, no?))
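For what it's worth, that pattern is commonly done with data-* attributes. A minimal sketch of what the selectors could look like — the data-test attribute name and the byTestAttr helper are my own illustration, not anything your dev lead specified:

```javascript
// Hypothetical helper: build a CSS selector from a stable test attribute.
// Assumes the app's HTML carries attributes like <button data-test="login-button">.
function byTestAttr(value) {
  return `[data-test="${value}"]`;
}

// In a Protractor spec you would then use it with by.css, e.g.:
//   element(by.css(byTestAttr('login-button'))).click();
```

Keeping the attribute purely for testing means refactors to classes or button text don't touch your locators, which is exactly the stability your dev lead is after.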
[16:38:15] E/configParser - Error code: 105
[16:38:15] E/configParser - Error message: configuration file conf.js did not export a config object
[16:38:15] E/configParser - Error: configuration file conf.js did not export a config object
at ConfigParser.addFileConfig (/usr/local/lib/node_modules/protractor/built/configParser.js:141:19)
at Object.initFn [as init] (/usr/local/lib/node_modules/protractor/built/launcher.js:93:22)
at Object.<anonymous> (/usr/local/lib/node_modules/protractor/built/cli.js:226:10)
at Module._compile (internal/modules/cjs/loader.js:702:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:713:10)
at Module.load (internal/modules/cjs/loader.js:612:32)
at tryModuleLoad (internal/modules/cjs/loader.js:551:12)
at Function.Module._load (internal/modules/cjs/loader.js:543:3)
at Module.require (internal/modules/cjs/loader.js:650:17)
at require (internal/modules/cjs/helpers.js:20:18)
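Error code 105 means Protractor loaded conf.js but found no config on its exports. A minimal conf.js needs to assign to exports.config — this is just a sketch, the framework and spec path are illustrative:

```javascript
// Minimal conf.js sketch: the file must set exports.config, otherwise
// Protractor's ConfigParser raises error 105 ("did not export a config object").
exports.config = {
  framework: 'jasmine',
  specs: ['example.spec.js'],
};
```

A common cause of this error is writing `module.exports = { ... }` or `var config = { ... }` without ever assigning to `exports.config`.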
npm run build:e2e
which would create an e2e/dist folder that contains a ready-to-go .js file that you can run with one command
The count capability - like it can't tell the difference between the tests. count: 2 runs a test twice, and a Jasmine reporter will output 2 specStarted's, but they have the same id :/ (so later, if I wanted to associate the test's specDone or other Jasmine output back to the same test, I couldn't):
{
"id": "spec0",
"description": "",
"fullName": "",
"failedExpectations": [],
"passedExpectations": [],
"pendingReason": ""
}, {
"id": "spec0",
"description": "",
"fullName": "",
"failedExpectations": [],
"passedExpectations": [],
"pendingReason": ""
}
{"id":"spec0","description":"should add one and one","fullName":"Protractor Demo App should add one and one","failedExpectations":[],"passedExpectations":[{"matcherName":"toEqual","message":"Passed.","stack":"","passed":true}],"pendingReason":"","status":"passed"}
{"id":"spec1","description":"should add one and two","fullName":"Protractor Demo App should add one and two","failedExpectations":[],"passedExpectations":[{"matcherName":"toEqual","message":"Passed.","stack":"","passed":true}],"pendingReason":"","status":"passed"}
{"id":"spec2","description":"should add one and three","fullName":"Protractor Demo App should add one and three","failedExpectations":[],"passedExpectations":[{"matcherName":"toEqual","message":"Passed.","stack":"","passed":true}],"pendingReason":"","status":"passed"}
{"id":"spec3","description":"should add one and Four","fullName":"Protractor Demo App should add one and Four","failedExpectations":[],"passedExpectations":[{"matcherName":"toEqual","message":"Passed.","stack":"","passed":true}],"pendingReason":"","status":"passed"}
{"id":"spec4","description":"should add one and five","fullName":"Protractor Demo App should add one and five","failedExpectations":[],"passedExpectations":[{"matcherName":"toEqual","message":"Passed.","stack":"","passed":true}],"pendingReason":"","status":"passed"}
// .spec.ts file
describe('example test', () => {
  process.env.instanceId = generateGuid();
  console.log('describe instanceId: ' + process.env.instanceId);
  it('run test steps', () => {
    // do stuff
  });
});

// jasmine reporter
export class CustomJasmineReporter implements CustomReporter {
  specStarted(data: any) {
    data.instanceId = process.env.instanceId;
    console.log(`specStarted: ${JSON.stringify(data)}`);
  }
  specDone(data: any) {
    data.instanceId = process.env.instanceId;
    console.log(`specDone: ${JSON.stringify(data)}`);
  }
}
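Building on that, one way to make the repeated spec0/spec0 entries distinguishable is to have the reporter combine the per-run instance id with Jasmine's spec id. A sketch only — tagSpec and uniqueId are names I made up, not part of Jasmine's API:

```javascript
// Sketch: derive a unique key for a spec result so that two runs of the
// same spec (e.g. from count: 2) can be told apart later.
// `instanceId` is assumed to be a per-browser-instance value like the
// generateGuid() one set in the describe block above.
function tagSpec(data, instanceId) {
  return Object.assign({}, data, {
    instanceId: instanceId,
    // "run1:spec0" and "run2:spec0" are now distinct keys.
    uniqueId: `${instanceId}:${data.id}`,
  });
}
```

With that, specStarted and specDone output for the same run share a uniqueId, so you can correlate them even when Jasmine reuses spec ids across repeated runs.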