ChristiaanBoe
@ChristiaanBoe
This might be an obvious question, but the FINN PYNQ version for the Ultra96 should be v2.6, right?
2 replies
Yaman Umuroglu
@maltanar
Heads-up to anyone using the dev branch of FINN: in preparation for newer Vitis versions, the environment variables for specifying the Xilinx tool installation have changed as a result of PR #367. Instead of the old VIVADO_PATH and VITIS_PATH you must now specify FINN_XILINX_PATH (e.g. /opt/Xilinx) and FINN_XILINX_VERSION (e.g. 2020.1). Please see the latest Getting Started instructions on https://finn-dev.readthedocs.io/en/latest/getting_started.html#
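For anyone scripting their setup, here is a minimal sketch (an assumption, not part of the official docs) of checking the new variables from Python before launching the flow; the values shown are just the examples from the message above:

import os

# New-style variables after PR #367 on the dev branch, replacing the old
# VIVADO_PATH / VITIS_PATH; the example values are the ones quoted above.
xilinx_path = os.environ.get("FINN_XILINX_PATH")        # e.g. /opt/Xilinx
xilinx_version = os.environ.get("FINN_XILINX_VERSION")  # e.g. 2020.1

if not xilinx_path or not xilinx_version:
    raise RuntimeError(
        "Set FINN_XILINX_PATH and FINN_XILINX_VERSION before running FINN on dev; "
        "see the Getting Started page linked above."
    )

# Standard Xilinx install layout: the Vivado version directory sits underneath.
print("Expecting Vivado under:", os.path.join(xilinx_path, "Vivado", xilinx_version))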
ChristiaanBoe
@ChristiaanBoe
Is there a file somewhere detailing all the supported layers in FINN?
2 replies
LeatherE
@LeatherE
Hi! I'm currently trying to modify finn-hlslib/mvau.hpp and test it with finn-hlslib/tb/test_conv3.tcl. But I found something strange: after I comment out #pragma HLS UNROLL in finn-hlslib/mvau.hpp, which I thought would increase the total simulation time, the Vivado HLS RTL simulation time does not differ at all from the original source code. So I want to ask: if I modify code in finn-hlslib/, do I need to make any additional changes so that the effect shows up in the Vivado HLS report?
2 replies
miziworld
@miziworld
hello, I need help from everyone. I downloaded finn-examples on a ZCU104 board and ran the examples. Now I'm going to train and convert new models using the FINN project. It's actually quite difficult to understand all the flows of FINN, so I have some questions. To get started: 1. select the model (for example ResNet), 2. train the model with the Brevitas PyTorch tool and convert it to an ONNX model, 3. export it to the FINN manager. Do I understand the flow correctly? Thank you for everyone's help.
2 replies
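For reference, steps 2 and 3 above usually boil down to a few lines; a minimal sketch, assuming the older brevitas.onnx export API used in the FINN notebooks of that time (the toy network, shapes and file names are placeholders):

import torch.nn as nn
import brevitas.onnx as bo
from brevitas.nn import QuantLinear, QuantReLU
from finn.core.modelwrapper import ModelWrapper

# Toy quantized network standing in for the trained Brevitas model (step 2).
trained_model = nn.Sequential(
    QuantLinear(64, 32, bias=True, weight_bit_width=4),
    QuantReLU(bit_width=4),
    QuantLinear(32, 10, bias=True, weight_bit_width=4),
)
trained_model.eval()

# Export to FINN-ONNX; input shape and file name are placeholders.
bo.export_finn_onnx(trained_model, (1, 64), "model_for_finn.onnx")

# Step 3: hand the exported model to the FINN compiler flow.
model = ModelWrapper("model_for_finn.onnx")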
I have one more question: for using finn-hlslib from GitHub as in the tutorial, is it right that you just add all the code and synthesise it in the Vivado HLS tool?
1 reply
Peter Lehnhardt
@pete-lennart

Hi, I am a little bit stuck and hopefully, someone can give me a little help.

What I am doing: I am implementing a custom fully-connected network for inference. The input is a feature vector with 31 float features. Every value is shifted by the mean and scaled by the standard deviation. The output should be a one-hot vector indicating one of three classes. The network uses 8 bits for weights.

My problem: My network uses input quantization with a QuantIdentity operation at the beginning, which translates to a MultiThreshold operation at the beginning of the ONNX graph. When I manually do the FINN flow without any manual data type assignments to the input tensor this MultiThreshold is not translated to a Thresholding_Batch by the InferThresholdingLayer transformation because MultiThreshold nodes without integer inputs are skipped (finn/src/finn/transformation/fpgadataflow/convert_to_hls_layers.py:888). Otherwise, if I assign a data type manually to the input tensor like it is often done in various tutorials (model.set_tensor_datatype(model.graph.input[0].name, DataType.INT8)) the MultiThreshold node is successfully translated to a Thresholding_Batch node but I get several errors indicating that I can not use floating-point feature vectors for integer inputs when I want to actually run the driver on my PYNQ-Z2 board.

I also don't find anything about purely floating-point inputs in any code on Github since most code is done on mnist or cifar which use integer image data. I would really appreciate it if someone could tell me how to use floating-point inputs.

2 replies
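Not an official answer, but one workaround for the float-input situation above is to fold the input quantization into host-side preprocessing and feed the accelerator integers; a rough sketch with a placeholder scale factor (in practice the scale comes from the exported QuantIdentity parameters):

import numpy as np

# Placeholder quantization scale; in practice read it from the exported
# QuantIdentity / MultiThreshold parameters of the model.
scale = 0.05

def quantize_features(x_float):
    # Map standardized float features onto the INT8 grid that the first
    # thresholding layer expects (illustrative only).
    q = np.round(x_float / scale)
    return np.clip(q, -128, 127).astype(np.int8)

x_float = np.random.randn(1, 31).astype(np.float32)  # 31 standardized features
x_int8 = quantize_features(x_float)  # feed this to the PYNQ driver instead of floats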
ChristiaanBoe
@ChristiaanBoe

Hello everyone, I have been making progress on creating an R-CNN network following the cybersecurity example in the notebooks. The model I have created is basically a bad YOLO clone using an 8-bit width for both the weights and the activation functions, and this seems to work well enough. The input to the model is [*, 3, 448, 448] with the input being a float ranging from 0 to 1. How do I fix my model so that it takes [1, 3, 448, 448] int8 ranging from 0 to 255 for my FINN model? Do I simply add a QuantIdentity to the input, or should I retrain it with a QuantIdentity layer already added? Is the dynamic batch size also a problem I need to fix, or will the conversion to ONNX automatically fix this?

https://colab.research.google.com/drive/1jrsTywghKYlDgKe2TSnJelzlth7Tsmc6 I have added a notebook if you wish to play around with my code

thanks in advance

2 replies
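A minimal sketch of the first option mentioned above (prepending an 8-bit unsigned input quantizer), assuming the Uint8ActPerTensorFloat quantizer from Brevitas; whether retraining is then needed is exactly the open question:

import torch.nn as nn
from brevitas.nn import QuantIdentity
from brevitas.quant.scaled_int import Uint8ActPerTensorFloat

# Placeholder standing in for the already-trained YOLO-style network above.
trained_net = nn.Identity()

# Prepend an 8-bit unsigned input quantizer so the exported graph starts with
# a quantization node that FINN can turn into a thresholding layer.
model_with_input_quant = nn.Sequential(
    QuantIdentity(act_quant=Uint8ActPerTensorFloat, return_quant_tensor=True),
    trained_net,
)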
Frank Hogervorst
@FrankHogervorst_gitlab
I seem to have a similar problem as Qiangpu Chen@qpchen on Jun 24 10:13, but I can't view the replies. Is there any way I can get those?
4 replies
LeatherE
@LeatherE
Hi everyone! I'm still trying to modify finn-hlslib/mvau.hpp and test it with finn-hlslib/tb/test_conv3.tcl. Recently I found that if I comment out all #pragma HLS UNROLL in finn-hlslib/mvau.hpp the latency remains the same, and only if I also comment out #pragma HLS PIPELINE II=1 on line 123 does the latency start to be affected by #pragma HLS UNROLL. So, from my observation, the overall speedup depends completely on #pragma HLS PIPELINE II=1 on line 123, and all the #pragma HLS UNROLL pragmas seem ineffective in this code. Is this behaviour normal?
3 replies
shashwat1198
@shashwat1198

Hello, I am trying to implement a conv2D model on the ZCU104 using FINN. This is how I describe the model:

import torch.nn as nn
import torch.optim as optim
from brevitas.nn import QuantConv2d, QuantReLU, QuantLinear

conv2Dmodel = nn.Sequential(
QuantConv2d(in_channels = 11,out_channels=40, kernel_size=3,weight_bit_width = 8),
nn.BatchNorm2d(40),
nn.Dropout(0.2),
QuantReLU(),
QuantConv2d(40, 80, kernel_size=3,padding=(1,1),weight_bit_width = 4),
nn.BatchNorm2d(80),
nn.Dropout(0.4),
QuantReLU(),
QuantConv2d(80, 120, kernel_size=3,padding=(1,1),weight_bit_width = 4),
nn.BatchNorm2d(120),
nn.Dropout(0.2),
QuantReLU(),
QuantConv2d(120, 160, kernel_size=3,padding=(1,1),weight_bit_width = 4),
nn.BatchNorm2d(160),
nn.Dropout(0.5),
QuantReLU(),
QuantConv2d(160, 200, kernel_size=3,padding=(1,1),weight_bit_width = 4),
nn.BatchNorm2d(200),
nn.Dropout(0.2),
QuantReLU(),
QuantConv2d(200, 240, kernel_size=3,padding=(1,1),weight_bit_width = 4),
nn.BatchNorm2d(240),
nn.Dropout(0.4),
QuantReLU(),
QuantConv2d(240, 256, kernel_size=3,padding=(1,1),weight_bit_width = 4),
nn.BatchNorm2d(256),
nn.Dropout(0.2),
QuantReLU(),
QuantConv2d(256, 512, kernel_size=3,padding=(1,1),weight_bit_width = 4),
nn.BatchNorm2d(512),
nn.Dropout(0.5),
QuantReLU(),
nn.Flatten(),
QuantLinear(2048,2,bias=True,weight_bit_width = 4),
nn.Softmax()
)

conv2Dmodel = conv2Dmodel.float()
criterion = nn.BCELoss()
optimizer = optim.SGD(conv2Dmodel.parameters(), lr=0.001)

The LUT resource utilization of the model turns out to be very high in the estimates, more than what is available on the ZCU104. Am I describing the model right? Can I do something more? In the finn-examples, MobileNet-v1 is implemented on the ZCU104 and this model is smaller than that! Any suggestion would be very helpful!

3 replies
Jesús Omar Lacruz
@jesusomarlacruz_twitter

Hi guys, I managed to run the end2end notebook example (cybersecurity) and now I want to instantiate the generated HLS IP in Vivado to validate it with ModelSim and on an FPGA board. When I open the finn_vivado_stitch_proj I found that the input is 40 bits wide, but in the design the NN input layer is 600 bits wide (after padding).

I am wondering whether I missed a step in the middle where this datapath width change is explained?

Thanks in advance for your help

5 replies
Frank Hogervorst
@FrankHogervorst_gitlab

Hello, I've got a question regarding the integration of stitched IP. The method I've successfully used in the past for the KU060 was the following:

  1. Add 'finn_design.v', 'finn_design_wrapper.v', and all folders with 'finn_design' to project.
  2. Add the memstream folder and all code_gen_ipgen folders to the IP repositories (adding only the .xcix to sources isn't sufficient).
  3. Stitch it together with the rest of my design and generate the bitstream.

I was wondering whether this is the right approach, as it seems a bit complex. Is there a way to get rid of all the dependencies in the /tmp/ folder and package it all up into one single container .xcix file?

6 replies
Yaman Umuroglu
@maltanar
Hi all! You may have heard that we're hosting a competition (with prizes!) using Brevitas to come up with efficient networks for radio signal modulation classification (https://bit.ly/brevitas-radioml-challenge-21). This week, on Thursday, we're hosting a live session where you can ask questions; register at https://itu.zoom.us/meeting/register/tJ0rf--gqTsrEt0M6Aqzsr7ufS5mG2EOXmCs -- we'll also share some tips around training efficient networks.
brsylmz23
@brsylmz23
Hello, I use an Ubuntu 18.04 VM, and when I try to connect my PYNQ-Z2 device I can't ping it from Ubuntu. But I can ping the device from Windows. Can someone help with the steps I need to follow to ping my PYNQ device from the Ubuntu VM? Thanks in advance.
1 reply
Sandro Magalhães
@samagalhaes

Hello everyone,

I am trying to convert the SDD-BiDet network for the FPGA. I changed the binary layers to compatible ones from the Brevitas library.
FINNManager starts the conversion correctly, but it crashes in the middle of the conversion. The ONNX exporter is complaining about the number of arguments. Do you have any idea how to solve this?

/opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py in _run_symbolic_function(g, n, inputs, env, operator_export_type)
    932                     return None
    933                 attrs = {k: n[k] for k in n.attributeNames()}
--> 934                 return symbolic_fn(g, *inputs, **attrs)
    935 
    936         elif ns == "prim":

/opt/conda/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py in wrapper(g, *args, **kwargs)
    125         def wrapper(g, *args, **kwargs):
    126             # some args may be optional, so the length may be smaller
--> 127             assert len(arg_descriptors) >= len(args)
    128             args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
    129             # only support _outputs in kwargs

Thanks

KrishnaMuvva
@KrishnaMuvva
Hello all, I am having trouble executing the FINN examples provided by Xilinx. I am trying to execute the 'Train MLP with Brevitas' tutorial on FINN (https://github.com/Xilinx/finn/tree/master/notebooks/end2end_example/cybersecurity). During the compilation process it generates an error saying that 'vivado_hls' was not found. However, the vivado_hls file exists in the path directory. Help will be appreciated.
1 reply
Matteo Vit
@matteovit

Hi all,
I am trying to implement an image detection NN using FINN.
The network is fairly standard: feature extraction with a CNN and two fully connected layers. I have trained the network with Brevitas and adapted the end-to-end example to read the network I trained.
I have just removed the transformations for binarized layers (present in the original example) and updated the folding code.

I get this error message:

AssertionError: Must have 1 AXI lite interface on IODMA nodes

I am wondering if I need more/different transformations.

I have uploaded the code here:

https://github.com/starwaredesign/finn-scratch

Is there some logging info that can help debug the issue?
Is there documentation about the various transformations and which ones are required for various types of networks?

Thanks!

tsaijohnson
@tsaijohnson

Hi all,
I am trying to implement a KWS model using FINN.
When I am at step 3, "Partitioning, Conversion to HLS Layers and Folding",
the error below occurs in model = model.transform(to_hls.InferStreamingMaxPool()).

AssertionError                            Traceback (most recent call last)
<ipython-input-9-0e930facb572> in <module>
      9 model = model.transform(absorb.AbsorbConsecutiveTransposes())
     10 model = model.transform(InferDataLayouts())
---> 11 model = model.transform(to_hls.InferStreamingMaxPool())
     12 
     13 model.save(build_dir+"/ckpt.t7.M5_11111.pth.finn_test1.onnx")

/workspace/finn-base/src/finn/core/modelwrapper.py in transform(self, transformation, make_deepcopy, cleanup, fix_float64)
    137         model_was_changed = True
    138         while model_was_changed:
--> 139             (transformed_model, model_was_changed) = transformation.apply(
    140                 transformed_model
    141             )

/workspace/finn/src/finn/transformation/fpgadataflow/convert_to_hls_layers.py in apply(self, model)
    265                     graph_modified = True
    266         if graph_modified:
--> 267             model = model.transform(InferShapes())
    268             model = model.transform(InferDataTypes())
    269         return (model, graph_modified)

/workspace/finn-base/src/finn/core/modelwrapper.py in transform(self, transformation, make_deepcopy, cleanup, fix_float64)
    137         model_was_changed = True
    138         while model_was_changed:
--> 139             (transformed_model, model_was_changed) = transformation.apply(
    140                 transformed_model
    141             )

/workspace/finn-base/src/finn/transformation/infer_shapes.py in apply(self, model)
     84     def apply(self, model):
     85         # hide your riches!
---> 86         hidden_ops = _hide_finn_ops(model)
     87         # call regular ONNX shape inference
     88         model = ModelWrapper(si.infer_shapes(model.model))

/workspace/finn-base/src/finn/transformation/infer_shapes.py in _hide_finn_ops(model)
     58         node_ind += 1
     59         if is_finn_op(node.domain):
---> 60             new_node = _make_shape_compatible_op(node, model)
     61             hidden_ops[str(new_node)] = node
     62             model.graph.node.insert(node_ind, new_node)

/workspace/finn-base/src/finn/transformation/infer_shapes.py in _make_shape_compatible_op(node, model)
     43         # lookup op_type in registry of CustomOps
     44         inst = registry.getCustomOp(node)
---> 45         return inst.make_shape_compatible_op(model)
     46     except KeyError:
     47         # exception if op_type is not supported

/workspace/finn/src/finn/custom_op/fpgadataflow/streamingmaxpool_batch.py in make_shape_compatible_op(self, model)
    117         ishape = tuple(model.get_tensor_shape(self.onnx_node.input[0]))
    118         warnings.warn("%s, %s, %s" %(str(ishape), str(exp_ishape), str(oshape)))
--> 119         assert ishape == exp_ishape, "Unexpect input shape for StreamingMaxPool."
    120         # implement tensor with correct shape
    121         values = np.random.randn(*oshape).astype(np.float32)

AssertionError: Unexpect input shape for StreamingMaxPool.

I think the error occurs because our max pool size is 1 x 4, not 2 x 2 as in the cnv example.
Is there any way to deal with this problem? Thanks!

hehehe-449
@hehehe-449_gitlab
Hi,
I want to know whether the LeNet-5 network can work on the PYNQ-Z2 board using FINN,
and which networks can currently work on the PYNQ-Z2 board using FINN?
brsylmz23
@brsylmz23

Hello all,

I get a failure when I try to complete the tfc_end2end_example. It's about SSH, but I could not really understand how to resolve it. Does anybody know what the problem is and how I can fix it?

image.png
ChristiaanBoe
@ChristiaanBoe

Hello everyone,

I get vastly different outputs whenever I try to simulate the golden input of my model using the finn.core.onnx_exec.execute_onnx method. I have used the following layers for my model:

import torch.nn as nn  # needed for nn.Module / nn.Sequential / nn.MaxPool2d used below
from brevitas.core.quant import QuantType
from brevitas.nn import QuantLinear, QuantReLU, QuantConv2d, QuantDropout, QuantIdentity
from brevitas.quant.scaled_int import Uint8ActPerTensorFloat
from brevitas.quant import Int8Bias as BiasQuant


class YoloV1(nn.Module):
  def __init__(self,split_size, num_boxes, num_classes, act_bit_width, weight_bit, in_channels=3):
        super(YoloV1, self).__init__()
        self.in_channels = in_channels
        self.net = nn.Sequential(
        QuantIdentity(act_quant=Uint8ActPerTensorFloat, return_quant_tensor=True),
        QuantConv2d(in_channels, 32, 3, stride=1, padding=1,bias=True,  weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
        QuantConv2d(32, 64, 3, stride=1, padding=1,bias=True, weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
        QuantConv2d(64, 128, 3, stride=1, padding=1,bias=True, weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
        QuantConv2d(128, 256, 3, stride=1, padding=1,bias=True, weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
        QuantConv2d(256, 512, 3, stride=1, padding=1,bias=True, weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
        QuantConv2d(512, 1024, 3, stride=1, padding=0,bias=True, weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        QuantConv2d(1024, 512, 3, stride=1, padding=1,bias=True, weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        QuantConv2d(512, 40, 1, stride=1, padding=1,bias=True, weight_bit_width=weight_bit, return_quant_tensor=True, bias_quant=BiasQuant),       
        QuantReLU(bit_width=act_bit_width,return_quant_tensor=True),
        nn.Flatten(), 
        QuantLinear(40 * split_size * split_size, 496,  bias=True,  weight_bit_width=2,  return_quant_tensor=True, bias_quant=BiasQuant),
        QuantDropout(0.0,  return_quant_tensor=True),
        QuantReLU(bit_width= act_bit_width,   return_quant_tensor=True),
        QuantLinear(496, split_size * split_size * (num_classes + num_boxes * 5), bias=True, weight_bit_width=weight_bit,  bias_quant=BiasQuant))


  def forward(self, x):
       return self.net(x)*10**2

At first glance, are there any layers I used that are not supported in FINN and could cause this mismatch?

If you wish to recreate this problem I have attached all necessary files in the following link:
https://drive.google.com/drive/folders/11JSKhgBZKAx6qPYpuOhHthhsP_h3sPly?usp=sharing

Any help would be most appreciated,

Christiaan
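For reference, a minimal sketch of how such a golden-output comparison is usually set up with finn.core.onnx_exec.execute_onnx; the file names below are placeholders:

import numpy as np
from finn.core.modelwrapper import ModelWrapper
from finn.core.onnx_exec import execute_onnx

model = ModelWrapper("yolov1_export.onnx")  # placeholder: exported FINN-ONNX model
x = np.load("golden_input.npy")             # same input that was fed to Brevitas

input_name = model.graph.input[0].name
output_name = model.graph.output[0].name

# Execute the ONNX graph with FINN and compare against the Brevitas reference.
produced = execute_onnx(model, {input_name: x})[output_name]
golden = np.load("golden_output.npy")       # placeholder: Brevitas reference output
print("max abs difference:", np.abs(produced - golden).max())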

wangtianxing1991
@wangtianxing1991
hello, I read the paper "Memory-Efficient Dataflow Inference for Deep CNNs on FPGA", which mentions a method to reduce BRAM usage by increasing the BRAM clock frequency, referred to as "frequency compressed memory packaging". Does FINN use this method in the current version? Thanks!
btwbtw01
@btwbtw01
Can I add hardware IP to FINN using Vivado HLS? My understanding of Vivado HLS is that you write C code and synthesize it, then combine it with the Vivado PS block to generate a bitstream. The tutorial seems to use Python to generate the custom IP, and there are limits to generating custom IP. Am I right? And if I generate a bitstream, how can I deploy it on the FPGA with PYNQ?
jingkaih
@jingkaih
Hi all, I'm playing around with the FINN HLS code and running into some difficulties when trying to synthesize conv_stream_top.cpp with the input precision set to 1.
This is what I got; hope someone can fix this bug. Thanks in advance.
INFO: [HLS 200-10] Analyzing design file 'conv_stream_top.cpp' ...
ERROR: [HLS 200-70] Compilation errors found: In file included from conv_stream_top.cpp:1:
In file included from conv_stream_top.cpp:45:
In file included from /home/centos/finn-hlslib/bnn-library.h:64:
In file included from /home/centos/finn-hlslib/convlayer.h:56:
In file included from /home/centos/finn-hlslib/mvau.hpp:56:
/home/centos/finn-hlslib/mac.hpp:169:9: error: no matching function for call to 'mul'
 res += mul(c[i], d(i,mmv), r);
        ^~~
/home/centos/finn-hlslib/mvau.hpp:291:21: note: in instantiation of function template specialization 'mac<2, ap_uint<16>, std::array<ap_uint<1>, 2>, Slice<ap_uint<1>, 1>::Container<ap_uint<2> >, ap_resource_dflt>' requested here
      accu[0][pe] = mac<SIMD>(accu[0][pe], wgt, act, r, 0);
                    ^
conv_stream_top.cpp:135:5: note: in instantiation of function template specialization 'Matrix_Vector_Activate_Stream_Batch<36, 16, 2, 2, Slice<ap_uint<1>, 1>, Slice<ap_int<1>, 1>, Identity, ap_uint<1>, ap_uint<2>, ap_uint<2>, PassThroughActivation<ap_uint<16> >, ap_resource_dflt>' requested here
    Matrix_Vector_Activate_Stream_Batch<MatrixW, MatrixH, 2, 2, Slice<ap_uint<1> >, Slice<ap_int<1> >, Identity, ap_uint<1> >
    ^
/home/centos/finn-hlslib/mac.hpp:88:6: note: candidate template ignored: substitution failure [with TC = ap_uint<1>, TD = ap_uint<1>]
auto mul(TC const &c, TD const &d, ap_resource_dflt const&) -> decltype(c*d) {
     ^
/home/centos/finn-hlslib/mac.hpp:113:6: note: candidate template ignored: substitution failure [with TC = ap_uint<1>, TD = ap_uint<1>]
auto mul(TC const &c, TD const &d, ap_resource_lut const&) -> decltype(c*d) {
     ^
/home/centos/finn-hlslib/mac.hpp:139:6: note: candidate template ignored: substitution failure [with TC = ap_uint<1>, TD = ap_uint<1>]
auto mul(TC const &c, TD const &d, ap_resource_dsp const&) -> decltype(c*d) {
     ^
1 error generated.
Failed during preprocessing.
    while executing
"source /home/centos/finn-hlslib/tb/hls-syn-conv-stream/sol1/csynth.tcl"
    invoked from within
"hls::main /home/centos/finn-hlslib/tb/hls-syn-conv-stream/sol1/csynth.tcl"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 hls::main {*}$args"
    (procedure "hls_proc" line 5)
    invoked from within
"hls_proc $argv"
Finished C synthesis.
Mohamed Moursi
@MohamedA95
Hello everyone, where can I find some examples for finn-hlslib? I found examples for FINN itself, but what if I want to use finn-hlslib directly?
jjc
@lloo099
1632396637(1).png
1632396679.png
Yaman Umuroglu
@maltanar
Dear all, the FINN community has grown considerably in size and we're getting lots of support requests. To support you better and make it easier to find answers, we'll be switching to GitHub Discussions for both finn and finn-hlslib:
If you have asked a question here on Gitter that has gone unanswered, I would kindly ask you to copy over your question to GitHub Discussions instead. Please also try to help others if you see a question that you may be able to answer, as the FINN team at Xilinx has limited bandwidth to offer support. Thanks for being part of the FINN community!
tobs95
@tobs95
Hello! Can I implement my own network with the FINN compiler without putting additional work into the HLS synthesis? And is there any tutorial out there that starts from scratch? Thank you!
liyue2ppy
@liyue2ppy

The problem: "No module named 'finn.util.visualization'".
When I try to use showInNetron, the following error occurs:

from finn.util.visualization import showSrc, showInNetron
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'finn.util.visualization'

Other things, including finn and finn.util, work with no problem. My finn is at tag 0.5b. How can I solve this problem?

satishkumar538
@satishkumar538
Satish_Finn.png
I am working on a model having a 1D convolution layer with In_Channel=2, Out_Channel=16, Kernel=8. While building, I am getting the following error (see screenshot). Please suggest.
2 replies
satishkumar538
@satishkumar538
1.PNG
2.PNG
While working on the RadioML ONNX model provided at https://github.com/Xilinx/finn-examples/releases/download/radioml/radioml_w4a4_small_tidy.onnx, I am facing the following error (see screenshots). Please suggest.
satishkumar538
@satishkumar538
Satish_Finn1.png
satishkumar538
@satishkumar538
"the ONNX model has been tidied up by removing the input quantization (See the last paragraph of attached screenshot of readme file provided with finn-example of vgg10-radionl)" How I can perform this removal of input quantization on the onnx file of the sandbox repository. Please suggest steps/ codes.
lihengyi-ai
@lihengyi-ai
image.png
lihengyi-ai
@lihengyi-ai
Is there any suggestion for the problem shown above when installing via "pip3 install finn-examples"?
lumawu
@lumawu

Hey there, I am currently trying to go through the end2end_cnv tool flow with a network I modified myself using Brevitas.
However, during partitioning I run into the following error while creating a dataflow partition:
https://imgur.com/l7SlGcR

I used the example network from the PyTorch tutorials as a basis:
https://imgur.com/NVYgA4u

Here are the modifications I made to it; training was done with a bit width of 1 for weights and activations and an input width of 8:
https://imgur.com/s3omu7C

The ONNX model has the following topology right before the partitioning call:
https://imgur.com/AuRjJR6

Brevitas is on v0.6, FINN on v0.7, PyTorch on v1.7.1.

This is my first time using my own network and I couldn't find much info on the error online, so I hope I can find help here.

Felix Jentzsch
@fpjentzsch
Please note that we switched over to GitHub Discussions and do not actively use this channel anymore.
https://github.com/Xilinx/finn/discussions
David Cain
@monkeyboyfr3sh:matrix.org
[m]
Maybe I missed something simple, but how do I export resource utilization?
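Not an authoritative answer, but per-layer estimates are usually pulled out with FINN's analysis passes; a minimal sketch, assuming a model that has already been converted to HLS layers:

import json
from finn.core.modelwrapper import ModelWrapper
from finn.analysis.fpgadataflow.res_estimation import res_estimation

model = ModelWrapper("model_hls_layers.onnx")  # placeholder: post convert-to-HLS model
estimates = model.analysis(res_estimation)     # dict: node name -> resource estimates

# Dump the per-node estimates to a file for further processing.
with open("resource_estimates.json", "w") as f:
    json.dump(estimates, f, indent=2)
print(estimates)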