Symisc Systems
@symisc
sod_img color channels are always inverted, so you should convert them to raw blobs via sod_image_to_blob() if you plan to interact with external sources such as OpenCV IplImage or PIL. Feel free to contact support@pixlab.io if you want to discuss the internals of the library.
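For reference, a minimal sketch of that interop path in C, using only the calls that appear elsewhere in this chat (the file path is a placeholder):

#include "sod.h"
#include <stdio.h>

/* Sketch only: export the pixels of a sod_img as a raw byte blob before
 * handing them to an external library such as OpenCV or PIL. */
static int export_blob(const char *zPath)
{
    sod_img img = sod_img_load_grayscale(zPath); /* placeholder input path */
    if (img.data == 0) {
        puts("Cannot load image");
        return -1;
    }
    unsigned char *zBlob = sod_image_to_blob(img);
    /* ... pass zBlob (img.w * img.h bytes for a grayscale image) to the
     * external library here ... */
    sod_image_free_blob(zBlob);
    sod_free_image(img);
    return 0;
}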
Ross Leitch
@balbatross
It turned out part of the issue I had was that I was supplying values in the range 0-255, not 0-1. Thanks for the response though. I'll flick you an email to stay in touch about the Python wrapper I'm working on; it might work out in our best interests to collaborate on a single Python lib for SOD (saw your post on Hacker News saying Python compatibility was coming).
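For anyone hitting the same range mismatch noted above: pixel values are expected in the 0-1 range, so 8-bit data has to be scaled down before it is written into the float image buffer. A minimal, SOD-agnostic sketch (buffer names are hypothetical):

/* Scale an 8-bit pixel buffer (0..255) into the float 0..1 range.
 * nPixels should be width * height * channels. */
static void normalize_pixels(const unsigned char *zSrc, float *pDst, int nPixels)
{
    int i;
    for (i = 0; i < nPixels; i++) {
        pDst[i] = zSrc[i] / 255.0f;
    }
}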
Ross Leitch
@balbatross
I'm trying to use the RealNet calls but it's leading to some weird errors: either the detection runs fine but returns no results, or if I try to resize the src image before handing it over I get a segfault.
Symisc Systems
@symisc
Could you share the C code snippet? RealNet works with grayscale images and frontal faces only.
Ross Leitch
@balbatross
cdef sod_img gImg = input.img
cdef unsigned char *zBlob = sod_image_to_blob(gImg)
cdef sod_box *boxes
cdef int nbox, rc
discovered = []
# Run RealNet detection on the raw grayscale blob
rc = sod_realnet_detect(self.net, zBlob, input.img.w, input.img.h, &boxes, &nbox)
if rc != 0:
    print("Error performing detection")
if nbox > 0:
    if nbox == 1:
        discovered.append(Discovered(boxes.x, boxes.y, boxes.w, boxes.h))
    elif nbox > 1:
        for ix in range(nbox):
            discovered.append(Discovered(boxes[ix].x, boxes[ix].y, boxes[ix].w, boxes[ix].h))
sod_image_free_blob(zBlob)
return (discovered, nbox)
This is the code that will cause either a segfault or return no faces. It works on my MacBook, but changing nothing and running it on the Pi, it starts causing issues.
Ignore the cdef stuff, that's just for Cython.
Symisc Systems
@symisc
If it segfaults on your Pi only it is probably run
Ross Leitch
@balbatross
?
Symisc Systems
@symisc
Running out of memory, either stack or heap allocated.
Ross Leitch
@balbatross
Yeah, possibly. I've noticed the segfaults seem more runtime-related like that. What about when it runs and returns no faces found?
Image is grayscale
Tried a few different dimensions too
Symisc Systems
@symisc
Where is the code you used to convert the input image to the grayscale colorspace?
RealNet works best with frontal faces captured directly from a video stream. It is recommended that you rely on the CNN model if you plan to work with small & inclined faces under different conditions.
Ross Leitch
@balbatross
It's in another class, but it's just using the SOD image-to-grayscale function. I'm capturing from a video stream, which is why I'm using RealNet; the CNN model is far too slow for what I'm trying to accomplish.
It's nice and functional when testing, but I can't get it working properly to even see if it's worthwhile using it on the Pi.
Symisc Systems
@symisc
If you are capturing the video stream directly and no faces were detected with RealNet, then probably something is wrong between the sod_img pixel conversion and the raw pixels from the video capture.
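One quick way to check that conversion, as a sketch (sod_img_save_as_png() is the call that appears, commented out, in a snippet later in this chat; the output path is a placeholder):

#include "sod.h"
#include <stdio.h>

/* Sketch: dump the exact sod_img that is about to be handed to
 * sod_realnet_detect(), so a broken pixel conversion shows up as a blank or
 * scrambled PNG on disk. */
static void dump_frame(sod_img frame)
{
    printf("frame: %d x %d\n", frame.w, frame.h);
    sod_img_save_as_png(frame, "debug_frame.png"); /* placeholder path */
}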
Symisc Systems
@symisc
Did you use the OpenCV video capture interfaces and convert the IplImage back to a sod_img via sod_img_load_from_cv_stream()?
Ross Leitch
@balbatross
I tried to use the CV methods, but the API you're using doesn't work well with Cython, so I ended up replicating the load-from-memory assignment and adjusting it for my buffer layout.
Symisc Systems
@symisc
I guess you did something wrong with your direct pixel manipulation
Ross Leitch
@balbatross
I can't have, because the same code works on my laptop,
and I've checked that the images are valid by writing them to disk.
soham24
@soham24
Hello, I am trying to run this on Android. Do you have any sample code for it? I have compiled sod.c, sod.h and both the writer and reader files, but I am always getting 0 nBox from the sod_realnet_detect function.
Symisc Systems
@symisc
I'm guessing that you did not read the target image properly. How did you manage to read the image from your Android device and convert it to grayscale? Could you share some code?
soham24
@soham24
Sure. I will share my code
soham24
@soham24

I need to have images in Mat format to process further.
The ConvertBitmapToRGBAMat function works correctly, as I have verified by saving the generated grayscale image on the device.
My code is below.

extern "C"
JNIEXPORT jobject JNICALL
Java_com_sodDetection(JNIEnv *env, jobject instance, jobject bitmap) {

cv::Mat rgbaMat;
cv::Mat grayMat;
jniutils::ConvertBitmapToRGBAMat(env, bitmap, rgbaMat, true);
cv::cvtColor(rgbaMat, grayMat, cv::COLOR_RGBA2GRAY);



sod_realnet *pNet; /* Realnet handle */
int i,rc;
/*
 * Allocate a new RealNet handle */
rc = sod_realnet_create(&pNet);
if (rc != SOD_OK) return reinterpret_cast<jobject>(rc);
/*
 * Register and load a RealNet model.
 * You can train your own RealNet model on your CPU using the training interfaces [sod_realnet_train_start()]
 * or download pre-trained models like this one from https://pixlab.io/downloads
 */
rc = sod_realnet_load_model_from_disk(pNet, "/storage/emulated/0/face.realnet.sod", 0);
if (rc != SOD_OK) return reinterpret_cast<jobject>(rc);


IplImage* imgipl = new IplImage(grayMat);
sod_img img = sod_img_load_cv_ipl(imgipl);

/// sod_img img = sod_img_load_grayscale(zFile);

if (img.data == 0) {
    puts("Cannot load image");
    return 0;
}



unsigned char *zBlob = sod_image_to_blob(img);
/*
 * Bounding boxes array
 */
sod_box *aBoxes;
int nbox;
/*
 * Perform Real-Time detection on this blob
 */
rc = sod_realnet_detect(pNet, zBlob, img.w, img.h, &aBoxes, &nbox);
if (rc != SOD_OK) return reinterpret_cast<jobject>(rc);
/* Consume result */
printf("%d potential face(s) were detected..\n", nbox);
for (i = 0; i < nbox; i++) {
    /* Ignore low score detection */
    if (aBoxes[i].score < 5.0) continue;
    /* Report current object */
    printf("(%s) x:%d y:%d w:%d h:%d prob:%f\n", aBoxes[i].zName, aBoxes[i].x, aBoxes[i].y, aBoxes[i].w, aBoxes[i].h, aBoxes[i].score);
    /* Draw a rose box on the target coordinates */
 //   sod_image_draw_bbox_width(color, aBoxes[i], 3, 255., 0, 225.);
    //sod_image_draw_circle(color, aBoxes[i].x + (aBoxes[i].w / 2), aBoxes[i].y + (aBoxes[i].h / 2), aBoxes[i].w, 255., 0, 225.);
}

// std::cout<<"time daken to detect "<<(clock()-(float)t)/CLOCKS_PER_SEC<<std::endl;
/* Save the detection result */
// sod_img_save_as_png(color, "/storage/emulated/0/yoyo.png");
/* Cleanup */
sod_free_image(img);
// sod_free_image(color);
sod_image_free_blob(zBlob);
sod_realnet_destroy(pNet);

return  bitmap;

}

Symisc Systems
@symisc
Try to remove the threshold condition (aBoxes[i].score < 5.0) and check the result. Also, does the image loaded via sod_img_load_cv_ipl() save to disk correctly?
Try to save grayMat on disk and load it via sod_img_load_grayscale().
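That debugging step in isolation, as a sketch built only from the calls already shown in this chat (both file paths are placeholders):

#include "sod.h"
#include <stdio.h>

/* Sketch: bypass the Bitmap/Mat conversion entirely and run RealNet on a
 * grayscale image saved to disk, to isolate where the pipeline breaks. */
static int detect_from_disk(void)
{
    sod_realnet *pNet;
    sod_box *aBoxes;
    int i, nbox, rc;

    rc = sod_realnet_create(&pNet);
    if (rc != SOD_OK) return rc;
    /* Placeholder model path */
    rc = sod_realnet_load_model_from_disk(pNet, "face.realnet.sod", 0);
    if (rc != SOD_OK) return rc;

    sod_img img = sod_img_load_grayscale("gray_frame.png"); /* placeholder */
    if (img.data == 0) {
        puts("Cannot load image");
        sod_realnet_destroy(pNet);
        return -1;
    }
    unsigned char *zBlob = sod_image_to_blob(img);
    rc = sod_realnet_detect(pNet, zBlob, img.w, img.h, &aBoxes, &nbox);
    if (rc == SOD_OK) {
        printf("%d potential face(s) detected\n", nbox);
        for (i = 0; i < nbox; i++) {
            printf("x:%d y:%d w:%d h:%d score:%f\n",
                   aBoxes[i].x, aBoxes[i].y, aBoxes[i].w, aBoxes[i].h, aBoxes[i].score);
        }
    }
    sod_image_free_blob(zBlob);
    sod_free_image(img);
    sod_realnet_destroy(pNet);
    return rc;
}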
soham24
@soham24
I am getting 0 nBoxes in the previous step, before the threshold check.
I also tried to load the grayscale image from disk directly via sod_img_load_grayscale(), but still no detection.
Symisc Systems
@symisc
Then probably no faces were detected. The model you purchased is specialized in detecting frontal faces only, such as those from your webcam or smartphone camera stream. I wonder which image you used to test the model.
soham24
@soham24
I have used the same kind of image, i.e. an image taken from a smartphone which has a frontal face.
soham24
@soham24
Do you have any sample code for Android?
Symisc Systems
@symisc
Nope
Symisc Systems
@symisc
The WebAssembly, real-time face detection model is now available to the public. Check https://github.com/symisc/sod/tree/master/WebAssemby for additional information!
Symisc Systems
@symisc
The SOD development team just published a new computer vision article on how to detect vehicle registration plates without heavy Machine Learning techniques, using just standard image processing routines already implemented in the SOD library. The article is available at: https://sod.pixlab.io/articles/license-plate-detection.html.
jjqcat
@jjqcat
Multi-core CPU support?
jjqcat
@jjqcat
How do you train multiple classes for SOD RealNets? Thanks.
ridams
@ridams
Does someone know how to convert an AVFrame (pixfmt = rgb24) to a sod_img?
Symisc Systems
@symisc
@jjqcat Multi-core CPU support is available only for the commercial version of SOD. You can take a look at https://pixlab.io/downloads for additional information.
Symisc Systems
@symisc
@jjqcat SOD RealNets support only one class per training cycle, but you can stack multiple models, such as a pedestrian and a car detector, in a single run via sod_realnet_load_model_from_disk() thanks to their fast processing speed!
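A sketch of what that stacking could look like, reusing the call pattern from the Android snippet above (model file names are placeholders):

#include "sod.h"

/* Sketch: register several single-class RealNet models on one handle so a
 * single sod_realnet_detect() pass reports all of them. The last argument is
 * passed as 0, as in the snippet above. */
static int load_stacked_models(sod_realnet **ppNet)
{
    int rc = sod_realnet_create(ppNet);
    if (rc != SOD_OK) return rc;
    rc = sod_realnet_load_model_from_disk(*ppNet, "pedestrian.realnet.sod", 0);
    if (rc != SOD_OK) return rc;
    rc = sod_realnet_load_model_from_disk(*ppNet, "car.realnet.sod", 0);
    return rc;
}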
Symisc Systems
@symisc
@ridams Take a look at the response to the issue you opened on how to do that: symisc/sod#15
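The issue above has the authoritative answer; for orientation only, here is a heavily hedged sketch of one way such a conversion could look. It assumes a sod_make_image(w, h, c) constructor exists and that sod_img stores planar float channels in the 0..1 range (consistent with the 0-1 range discussed earlier in this chat); the channel order may also need swapping per the inverted-channels note at the top.

#include "sod.h"
#include <libavutil/frame.h>

/* Hedged sketch: copy an rgb24 AVFrame into a sod_img, assuming a planar
 * float 0..1 pixel layout and a sod_make_image(w, h, c) constructor. */
static sod_img avframe_rgb24_to_sod(const AVFrame *pFrame)
{
    sod_img img = sod_make_image(pFrame->width, pFrame->height, 3);
    int x, y, c;
    for (y = 0; y < pFrame->height; y++) {
        const unsigned char *zRow = pFrame->data[0] + y * pFrame->linesize[0];
        for (x = 0; x < pFrame->width; x++) {
            for (c = 0; c < 3; c++) {
                /* one full plane per channel */
                img.data[c * pFrame->width * pFrame->height + y * pFrame->width + x]
                    = zRow[x * 3 + c] / 255.0f;
            }
        }
    }
    return img;
}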
SYED
@syedmustafa54
Hello, is there any Python API code for this?
Any way to download the face detection model for free?
Pre-trained RealNets Models
Symisc Systems
@symisc
@syedmustafa54 Only C/C++, but you can search GitHub for foreign bindings. No, you have to purchase it from https://pixlab.io/downloads in order to use the face model.