Tips for HVL and HDL users with special emphasis on Specman-e, SystemVerilog and Questa
Specman Verification
Specman profiler for beginners 
Thursday, November 12, 2009, 03:31 AM - Hardcore verification
We had a small discussion: Pini, my great office-mate, claimed that his way of writing e code was more efficient than mine. I had a feeling mine was as good as his, but wouldn't have been ready to place a bet worth more than $5 to back this up. We had an hour or two on our hands, and we thought that instead of fighting, we could settle our differences in a civilized manner by way of an experiment, which would also help us freshen up our knowledge of running the Specman profiler.

To make our experiment reliable we carried it out with both the simulator and Specman in batch mode. The time consumed by GUI-related activities such as refreshing screen images simply adds noise to the profiler results, which are not the easiest thing to figure out anyway. In our case, we had super-simple code that did not include any printouts to the screen or to files, but if you're running the profiler with a normal testbench, you might want to disable as many of these as you can. If you've done things the eRM way, and have used only message/messagef and no out/outf/print, this should be a piece of cake, at least on the testbench side.

The following zip file holds my code and Pini's (every_clock.e and every_10_clocks.e, respectively), a small DUT, and the simple magic commands required to compile everything and run the profiler in a Cadence IES environment. It also holds another piece of code (optimized.e) and its corresponding profiler report, which, as you can easily tell from its name, is the optimized version that I created based on the profiler reports for the other two versions. For those of you who detest zip, the competing code snippets and the optimized one can also be found below, and the profiler reports can be viewed here (Pini), here (Avidan) and here (optimized), but try not to look at the optimized version until you've reached the end...

A quick look at the results for my code and Pini's will give you the impression that our contest ended more or less in a tie - the difference in the percentage of time consumed by Specman between the two versions is only 1.3%: 84.8% in my version compared to 86.1% in Pini's. However, the quick look is misleading in this case, and this 1.3% is more significant than it would seem. To understand why, think about a case where version A consumes 99% of simulation time, and version B 98%. Assuming the simulator code used with both A and B is identical, the 1% with A reflects the same CPU usage as the 2% with B, which actually means that version B's e code is using 50% less CPU than version A's e code. Since your e code in this case is taking the vast majority of simulation time, it means that B will run almost twice as fast as A! Therefore, if you have two results you want to compare, you might want to use the following formula to calculate the actual performance improvement (this is in fact a very close approximation of the real difference, but we're not doing our A level in Mathematics here):

"Version B will take ((version A's simulator time percentage)/(version B's simulator time percentage))*100% of the time version A takes".

and in our case:

Avidan's version will take 13.86/15.18*100 = 91% of the time Pini's version takes.

And this looks already more significant...
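If you want to check the arithmetic, here is a minimal Python sketch of the formula; the percentages are the ones quoted from the two profiler reports, and the assumption, as stated above, is that the absolute simulator time is identical between the two runs:

```python
def relative_runtime(sim_pct_a, sim_pct_b):
    """Fraction of version A's total runtime that version B will take,
    assuming both runs spend the same absolute time in the simulator."""
    return sim_pct_a / sim_pct_b

# Pini's simulator share was 13.86%, Avidan's 15.18%
# (roughly 100 - 86.1 and 100 - 84.8 from the reports)
print(f"{relative_runtime(13.86, 15.18):.0%}")  # prints "91%"
```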

I must admit that I'm not an expert at figuring out what goes on in the profiler report: basically, I work my way through it fairly stupidly, looking for the most wasteful parts of my code and trying to recode them, but without always being able to pinpoint the exact problem, to tell whether my recoding will improve things or make them worse, or even whether there was a problem to begin with rather than just normal behavior. Looking at the reports for my code and Pini's, I could see that his "temporals" section takes almost as much time as my "temporals" and "user methods" sections put together. This makes sense because what Pini chose to implement in a temporal expression (i.e. wait till the cnt signal is 10), I have implemented inside the TCM. Basically, our argument was over the question of whether it is better to make the TCM or the event more complex, and the results seem to indicate that both options are very much equivalent. However, in Pini's report there's an enigmatic item called "scheduling" that takes more time than in my report, and probably makes the difference... What can it point to?

I gave this question some thought and the best guess I could come up with was that the difference is probably due to the fact that Pini's code has two parallel processes (the temporal expression and the TCM), while my code has only one (the TCM). I assumed that "scheduling" probably referred to switching between the threads, and since Pini has two threads, while I have only one, his scheduling expenses are higher. To confirm this hypothesis, I replaced the TCM in Pini's code with an "on" construct, thereby removing one of the threads, which resulted in a version that did much better than both mine and Pini's. This was a bit too good, because I was expecting the new version to be more or less equivalent to mine, but I already told you I'm not the biggest profiler expert, right?! If someone can explain the results better, please write me, and I'll put your explanation here, along with the appropriate credit.

BTW, from the pure e perspective, this entry doesn't result in any clear coding guidelines. While "on" seems to be the winner, it has several annoying characteristics that brought it to the verge of extinction in modern-day e. To name just two: it can't consume any time, and it can't trigger on an event defined elsewhere. Also, if my assumption is correct and the differences between the versions are just related to the number of threads used in each, then in any reasonably sized testbench where you have 10 threads or more, these coding styles should be equivalent.

Thanks to the people who helped me with this entry or corrected mistakes:
Daniel Pörsch
Ronen Ben-Zino


----- Avidan's version - TCM that runs every clock -----




<'
unit env_u {
    clk : in event_port is instance;
    keep bind(clk, external);

    cnt : in simple_port of uint (bits:16) is instance;
    keep bind(cnt, external);

    !cnt_10s : uint;

    every_clk_tcm() @ clk$ is {
        while (TRUE) {
            if (cnt$ == 10) then {
                cnt_10s += 1;
            };
            wait cycle;
        };
    };

    run() is also {
        start every_clk_tcm();
    };
};

extend sys {
    env : env_u is instance;
    keep env.hdl_path() == "~/top";
    keep env.clk.hdl_path() == "clk";
    keep env.cnt.hdl_path() == "cnt";

    setup() is also {
        set_config(run, tick_max, MAX_INT, exit_on, error);
    };
};
'>




----- Pini's version - TCM that runs every 10th clock -----




<'
unit env_u {
    clk : in event_port is instance;
    keep bind(clk, external);

    cnt : in simple_port of uint (bits:16) is instance;
    keep bind(cnt, external);

    !cnt_10s : uint;

    event every_10_clks_e is true(cnt$ == 10) @ clk$;

    every_10_clks_tcm() @ every_10_clks_e is {
        while (TRUE) {
            cnt_10s += 1;
            wait cycle;
        };
    };

    run() is also {
        start every_10_clks_tcm();
    };
};

extend sys {
    env : env_u is instance;
    keep env.hdl_path() == "~/top";
    keep env.clk.hdl_path() == "clk";
    keep env.cnt.hdl_path() == "cnt";

    setup() is also {
        set_config(run, tick_max, MAX_INT, exit_on, error);
    };
};
'>







----- Optimized version - using "on" -----



<'
unit env_u {
    clk : in event_port is instance;
    keep bind(clk, external);

    cnt : in simple_port of uint (bits:16) is instance;
    keep bind(cnt, external);

    !cnt_10s : uint;

    event every_10_clks_e is true(cnt$ == 10) @ clk$;

    on every_10_clks_e {
        cnt_10s += 1;
    };
};

extend sys {
    env : env_u is instance;
    keep env.hdl_path() == "~/top";
    keep env.clk.hdl_path() == "clk";
    keep env.cnt.hdl_path() == "cnt";

    setup() is also {
        set_config(run, tick_max, MAX_INT, exit_on, error);
    };
};
'>







Grepping all loaded e files for a regex
Friday, October 2, 2009, 05:13 AM - Hardcore verification
Warning: This entry has been extensively modified since it was first published about two weeks ago. Many good people, more than I thought were reading this blog, have pointed out mistakes, inaccuracies, and outright lies in the ~20 lines I wrote. After I got one or two lawsuit threats I finally decided to rewrite it ;-)

When originally posted, this entry started with the sentence "For some reason that escapes me, Specman's 'search' command will only look for a regular string in the e files in your testbench, but not for a regular expression". While this sentence is true (i.e. when you type "search" at Specman's command line you can only give a string as a parameter), it is absolutely not true that you can't do a regex search from within Specman itself: pushing Specman's GUI search button will take you to a search screen where you can look for regular expressions or for regular strings inside type definitions, constraints and whatnot. The search is performed, needless to say, in all the currently loaded e modules.

If, like me, you're working for a company that doesn't have an all-you-can-eat deal with Cadence, then you probably don't want to take a license every time you want to search your e files. To search the e modules in your testbench for regular expressions and strings offline, you can use the Perl script below to extract a list of e files from your specman.elog file, then use this file list with grep. This requires you to run Specman only once, so that you have a specman.elog file where all the files in your testbench are imported: the stubs creation stage will do for this purpose. Once the file list is created, here are some useful greps you can use it with:

grep "field_name.*\:" `cat modules.txt` #will take you to the field definition (and maybe to one or two other garbage places)
grep "\:.*field_type" `cat modules.txt` #will take you to all the places where a field of a certain type is defined
grep "port_name.*hdl_path" `cat modules.txt` #will take you to the place where a port is tied to its hdl path.
grep -E "struct[ ]+[a-zA-Z0-9_]+" `cat modules.txt` #will look for all struct definitions (note the -E: basic grep treats "+" literally)

But why write your greps alone when you can get David Robinson to lend you his? Verilab's great e toolkit, written by David, will look for just about anything you might want in the list of files you feed it, and print the results nicely formatted to your screen. This will save you the headache of coming up with the correct regular expression for whatever you're looking for. The toolkit will also do some wonderfully useful things such as collecting all the class or enum extensions from your files, or printing the "like" inheritance hierarchy. In short, it will do everything you can do from Specman and more.

Many thanks to Corey Goss from Cadence for his feedback, David Robinson from Verilab for pointing me to the toolkit, Pini Krengel, who provided the original version of the script below, and Ran Karen, who knows everything about Specman, for their help with this post. God, I hope I made everyone happy and won't have to rewrite it again...


#!/usr/bin/perl

# Extract the list of imported e files from a Specman log and write it,
# one file per line, to modules.txt.
open(MODULE_LIST, ">modules.txt") || die $!;

{
  local $/ = undef;    # slurp mode - read the whole log in one go
  open LOG, "specman1.elog" or die "Couldn't open file: $!";
  binmode LOG;
  $log = <LOG>;

  # turn cyclic imports into normal import form
  $log =~ s/\n/ /g;
  $log =~ s/Loading \(([^\)]*)\)/Loading $1/g;
  $log =~ s/ \+ ([^ ]*)/Loading $1/g;
  $log =~ s/Loading/\nLoading/g;

  # match normal imports
  my $normal_import_match_string = '^Loading[ |\n]([^ ]+\\.e)';
  my @evc_files;
  if (@evc_files = $log =~ /$normal_import_match_string/smg) {
    foreach $evc_file (@evc_files) {
      print MODULE_LIST "$evc_file\n";
    }
  }
}

close(LOG);
close(MODULE_LIST);



Parsing Verilog using Perl (out of boredom) 
Saturday, September 26, 2009, 06:31 AM - Hardcore verification
I've always suspected that boredom, not money, power, sex or anything else, is what makes the world go round, and after 3 weeks of unemployment, now finally behind me, I am at last able to confirm this suspicion. In search of something meaningful to do with my time between the beach, the Israeli version of Survivor, and some occasional meetings with potential employers, I decided to pursue an idea I had some time ago, and try to implement it myself. This took me in an unexpected direction: trying to figure out the best way of parsing Verilog files. It is hard to say at this stage where my idea will end up, but as a by-product I got some short useful insights and some real code for anyone who intends to use Perl to look into the entrails of a Verilog file...

Starting with Google, a simple search for "perl verilog parser" will immediately take you to a package called Verilog::Parser, and another package called v2html. Using these ready-made packages is an easy way to go, but you have to be aware that both of them will hide a lot of information that you might want to have. Verilog::Parser, for example, breaks a Verilog file into atoms such as keywords, tokens, and numbers and gives you a callback for each. However, if you want to look for something bigger, say, a complete "always" block, you will find yourself re-parsing these atoms into blocks, which is not an easy task. Since my idea required a more high-level point of view, I decided to look for something else.

The next step, therefore, was to drop Verilog::Parser and v2html, and just look at parsing files with Perl. I was assuming that Perl has something generic to offer here, since almost every second programmer uses Perl exactly for this kind of task. And in fact, Perl does have a very cool and well-documented package called Parse::RecDescent, which will parse any file according to any BNF grammar you provide, and enable you to perform custom actions for any production it sees on its way. Verilog's BNF can be found here, and in a version that can be easily converted into what Perl's RecDescent would eat, right here. This last link leads to a page under ANTLR's website, and I'll come back to ANTLR in a second.
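To give a feel for the grammar-driven approach without reproducing the full Verilog BNF, here is a toy recursive-descent parser in Python for just a module header. The single grammar rule, the helper names and the example input are all made up for illustration; a real tool would of course use the complete grammar:

```python
import re

# Toy grammar, recursive-descent style:
#   module_decl := 'module' IDENT '(' IDENT (',' IDENT)* ')' ';'
TOKEN = re.compile(r"\s*([A-Za-z_]\w*|[(),;])")

def tokenize(src):
    # split the source into identifiers and punctuation
    pos, toks = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")
        toks.append(m.group(1))
        pos = m.end()
    return toks

def parse_module(toks):
    def expect(val=None):
        tok = toks.pop(0)
        if val is not None and tok != val:
            raise SyntaxError(f"expected {val!r}, got {tok!r}")
        return tok
    expect("module")
    name = expect()            # module name
    expect("(")
    ports = [expect()]         # at least one port, per the toy rule
    while toks[0] == ",":
        expect(",")
        ports.append(expect())
    expect(")")
    expect(";")
    return {"name": name, "ports": ports}

print(parse_module(tokenize("module counter (clk, rst, cnt);")))
# {'name': 'counter', 'ports': ['clk', 'rst', 'cnt']}
```

Each grammar production becomes a small piece of code that consumes tokens, which is essentially what Parse::RecDescent generates for you from the BNF.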

After adapting the grammar to Perl and installing RecDescent, I was ready to go, and although I still have some difficulties implementing my idea, they're much smaller compared to the ones I had to deal with before... If you think this might be useful for you as well, you can download my Verilog grammar for Perl's RecDescent, and a small example, right here.

As mentioned above, I adapted the Perl RecDescent Verilog grammar from a grammar I found on ANTLR's site. This, of course, made me curious about what ANTLR was, and I soon found out it is a very cool free IDE for debugging grammars and building compilers. For example, it can give you a graphical representation of any grammar you load into it, which can be very helpful when you're trying to figure out how a rule is broken down into individual productions all the way to the individual identifiers. It also has an active user community which helped me patiently with some stupid questions I had. It is worth trying out...

