
Showing posts from 2012

Kotlin Language Features Related to Null Handling

Any software engineer with a Java background will find the null-handling features of the Kotlin language interesting. Let's summarize this topic with some examples.

Nullable types: In Kotlin, types are non-nullable by default. If you want a variable to be able to hold a null value, you need to explicitly declare its type as nullable using the Type? syntax. For example, String? denotes a nullable string, while String represents a non-nullable string.

Safe calls (?.): Kotlin introduces the safe call operator (?.) for handling nullable types. It allows you to safely invoke a method or access a property on a nullable object. If the object is null, the expression evaluates to null instead of throwing a NullPointerException. Example:

```kotlin
data class Person(val name: String, val age: Int, val address: String?)

fun main() {
    // Create a person with an address and a person with a null address
    val person1 = Person("John Doe", 25, "123 Main Street")
    val person2 = Person("Jane Doe", 30, null)

    // Safe call: yields the length of the address, or null when address is null
    println(person1.address?.length) // 15
    println(person2.address?.length) // null
}
```

Real-Life Test-Driven Development Benefit: Tests as Documentation

I've been working on an integration component that listens to an LDAP server and notifies applications about changes to LDAP entries. My component searches for LDAP change logs, and a change log has the "targetdn" attribute. Example:

targetDn: uid=ND2392,ou=Users,dc=MyCompany

There is a business rule about the notification process: if the organization unit is "Special Users", skip the notification for that change. Example:

targetDn: uid=ND2392,ou=Special Users,dc=MyCompany

This change log should be skipped because it belongs to the "Special Users" organization unit. I use a regular expression to parse the targetdn. I isolated the code that does the parsing and wrote unit tests for many inputs. Of course, I added a unit test for the business rule mentioned above. At some point I thought my regular expression was not good enough and changed it:

Old regex: [oO][uU]=[^,]*
New regex: [oO][uU]=[^,\s]*

I was getting prepared to commit my code…
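To see why the unit test for the business rule matters, compare the two regexes on the "Special Users" DN: the new regex also stops at whitespace, so it truncates "ou=Special Users" to "ou=Special" and would silently break the skip rule. A minimal sketch (class and method names here are illustrative, not the original test code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TargetDnRegexDemo {

    // Return the first substring of input that matches regex, or null if none.
    static String firstMatch(String regex, String input) {
        Matcher m = Pattern.compile(regex).matcher(input);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        String targetDn = "uid=ND2392,ou=Special Users,dc=MyCompany";

        // Old regex: stop only at commas -> captures the full OU name
        System.out.println(firstMatch("[oO][uU]=[^,]*", targetDn));
        // prints: ou=Special Users

        // New regex: also stop at whitespace -> OU name is cut short
        System.out.println(firstMatch("[oO][uU]=[^,\\s]*", targetDn));
        // prints: ou=Special
    }
}
```

A test that asserts the extracted OU equals "Special Users" fails immediately with the new regex, which is exactly the kind of documentation-by-test the post's title refers to.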

Fluent Interface Example in an Enterprise Integration Project

While working on an enterprise integration project about virtual bank payments, I utilized a simple fluent interface approach. The purpose of the code is to create strongly typed objects from a generic SMO (ServiceMessageObject) instance. Note: in the context of IBM integration technologies, ServiceMessageObject represents the "message". In our example, it represents a web service request. Old code:

```java
if (paymentInfo.getDataObject("totalAmount") != null) {
    totalAmount = paymentInfo.getDataObject("totalAmount");
    DataObject totalAmount2 = totalAmount.getDataObject("totalAmount");
    if (totalAmount2.getBigDecimal("amount") != null) {
        parameters.setAmount(totalAmount2.getBigDecimal("amount"));
    }
    String currencyCode = totalAmount2.getString("currencyCode");
    parameters.setCurrencyCode(currencyCode);
}
Byte numberOfInstallments = paymentInfo.getByte("numberOfInstallments");
if (numberOfInstallments !…
```
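The excerpt cuts off before the fluent version, but the idea can be sketched as a null-safe navigator that collapses the nested null checks into one chained expression. This is a minimal illustration, not the original code: the DataObject interface below is a made-up stand-in for the much richer IBM SMO API, and Smo, MapDataObject, path, bigDecimal, and string are hypothetical names.

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal stand-in for the SMO DataObject API used above.
interface DataObject {
    DataObject getDataObject(String name);
    BigDecimal getBigDecimal(String name);
    String getString(String name);
}

// Fluent, null-safe navigator: each step tolerates a missing parent,
// so nested null checks collapse into one chained expression.
final class Smo {
    private final DataObject current; // becomes null once any path segment is missing

    private Smo(DataObject current) { this.current = current; }

    static Smo from(DataObject root) { return new Smo(root); }

    Smo path(String name) {
        return new Smo(current == null ? null : current.getDataObject(name));
    }

    BigDecimal bigDecimal(String name) {
        return current == null ? null : current.getBigDecimal(name);
    }

    String string(String name) {
        return current == null ? null : current.getString(name);
    }
}

// Tiny in-memory implementation, just to make the sketch runnable.
final class MapDataObject implements DataObject {
    private final Map<String, Object> values = new HashMap<>();

    MapDataObject set(String name, Object value) { values.put(name, value); return this; }

    public DataObject getDataObject(String name) { return (DataObject) values.get(name); }
    public BigDecimal getBigDecimal(String name) { return (BigDecimal) values.get(name); }
    public String getString(String name) { return (String) values.get(name); }
}
```

With such a wrapper, the old nested checks read as a single line, for example `Smo.from(paymentInfo).path("totalAmount").path("totalAmount").bigDecimal("amount")`, which simply yields null when any segment is absent.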

Interpreter for a Subset of Java Language (J-)

In this post we will have a look at the interpreter part of my project. Before interpreting the source code, the following things happened:
• The lexer built the tokens from the source code.
• The AST (Abstract Syntax Tree) was created.
• The parser traversed the AST and populated our symbol table.

The interpreter should once again traverse the AST and execute the statements.

```java
private Object exec(CommonTree statement) {
    switch (statement.getType()) {
        case ProgramWalker.BLOCK:
        case ProgramWalker.MAINMETHOD:
            block(statement);
            break;
        case ProgramWalker.PRINT:
            print(statement);
            sendSourceLineMessage(statement);
            break;
        case ProgramWalker.VARDECL:
            currentSpace.put(statement.getChild(1).getText(), null);
            sendSourceLineMessage(statement);
            break;
        case ProgramWalker.METHOD:
            break;
        case ProgramWalker.INT:
            ret…
```
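The pattern above, a switch over node types that recurses into children, is the core of any tree-walking interpreter. Here is a self-contained miniature of the same idea, assuming made-up node tags (BLOCK, VARDECL, ASSIGN, INT, ID, ADD) rather than the project's actual ProgramWalker constants:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal AST node: a type tag, optional text, and children.
class Node {
    final int type;
    final String text;
    final List<Node> children = new ArrayList<>();

    Node(int type, String text) { this.type = type; this.text = text; }
    Node add(Node child) { children.add(child); return this; }
}

class MiniInterpreter {
    static final int BLOCK = 0, VARDECL = 1, ASSIGN = 2, INT = 3, ID = 4, ADD = 5;

    final Map<String, Object> memory = new HashMap<>(); // variable name -> value

    Object exec(Node n) {
        switch (n.type) {
            case BLOCK:                         // execute each child statement in order
                for (Node c : n.children) exec(c);
                return null;
            case VARDECL:                       // declare the variable, initially null
                memory.put(n.children.get(0).text, null);
                return null;
            case ASSIGN:                        // evaluate the right side, store it
                memory.put(n.children.get(0).text, exec(n.children.get(1)));
                return null;
            case INT:                           // integer literal
                return Integer.parseInt(n.text);
            case ID:                            // variable read
                return memory.get(n.text);
            case ADD:                           // evaluate both operands, add
                return (Integer) exec(n.children.get(0)) + (Integer) exec(n.children.get(1));
            default:
                throw new IllegalStateException("unknown node type " + n.type);
        }
    }
}
```

Executing the tree for "int x; x = 2 + 3;" leaves 5 in memory under "x"; the real interpreter differs mainly in using ANTLR's CommonTree and many more node types.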

A Code Analyzer for a Subset of Java Language (J-) Using ANTLR - PART 2

As we know, the semantic analysis phase of a compiler includes checking the types of expressions. To do that, we have to maintain a symbol table which maps variable names to their types and locations. There may be many variables with the same name in different scopes. This is legal in most languages, and we have to handle this situation. We have to know when the following scopes are entered:
- class scope
- method scope
- no inner block scopes (my language does not currently support them)

We will do type checking in two phases. First we'll populate the symbol table, so that it contains the symbols with their scopes. Then we'll check the types of the symbols that are operands of the addition operation by querying the symbol table. ANTLR allows us to inject code at different places in the generated parser. We need to know when a class/method scope is entered and when the end of any scope is reached. Below is some grammar code from the project:

```
classDecl
@init { visitor.beforeClass(); System.out.p…
```
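A scoped symbol table of the kind described above is commonly implemented as a stack of maps: entering a scope pushes a map, leaving it pops one, and lookup searches from the innermost scope outward so that inner declarations shadow outer ones. A minimal sketch (class and method names are illustrative, not the project's actual code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// A simple scoped symbol table: one map of name -> type per open scope.
class SymbolTable {
    private final Deque<Map<String, String>> scopes = new ArrayDeque<>();

    void enterScope() { scopes.push(new HashMap<>()); }   // e.g. on beforeClass / beforeMethod
    void exitScope()  { scopes.pop(); }                   // e.g. when a scope's closing brace is reached

    void define(String name, String type) { scopes.peek().put(name, type); }

    // Search innermost scope outward, so inner declarations shadow outer ones.
    String resolve(String name) {
        for (Map<String, String> scope : scopes) {
            String type = scope.get(name);
            if (type != null) return type;
        }
        return null; // undeclared
    }
}
```

Phase one fills this table via the injected ANTLR actions; phase two calls resolve for each operand of an addition and reports a type error when the result is not int.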

A Code Analyzer for a Subset of Java Language (J-) Using ANTLR - PART 1

Our goal is to write a code analyzer for a language called "J minus". The source code of the whole project can be found here: https://code.google.com/p/j-minus/

Let's start with the language definition.

Lexical Rules:
Identifiers: a sequence of letters.
Integer literals: a sequence of decimal digits.
Binary operators: && < + - *

Grammar Rules:
Program -> MainClass ClassDecl*
MainClass -> class id { public static void main (String [] id) { Statement* } }
ClassDecl -> class id { VarDecl* MethodDecl* }
VarDecl -> Type id ;
MethodDecl -> public Type id ( ( FormalParameter (, FormalParameter)* )? ) { VarDecl* Statement* return Exp ; }
FormalParameter -> Type id
Statement -> id = Exp ;
Exp -> AdditionExp | SimpleExp
SimpleExp -> id | int
AdditionExp -> SimpleExp + SimpleExp
Type -> 'int' | 'boolean'

We must pass the following steps to reach our goal:
1) Implement the lexer and parser for the language (use ANTLR).
2) Bu…
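To make the rules concrete, here is a small program that the grammar above accepts (the class, method, and variable names are illustrative; note that per these rules, statements are only assignments and expressions only single additions of identifiers or integer literals):

```
class Main {
    public static void main (String [] args) {
        x = a + b;
    }
}

class Calc {
    int total;

    public int add (int a, int b) {
        int result;
        result = a + b;
        return result;
    }
}
```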

"Parsing" Konusu ile ilgili Temel Kavramlar

--Still being written--

We have seen that regular expressions let us express the "lexical" structure of a program. Is it possible to do the same for the sentences of a language? The concept of a "context-free grammar" is used to express the structure of the sentences in a language. To build up the complex constructs of the language, we proceed by applying "productions". Let's examine a language expressed with the following 5 productions:

S -> S ; S
S -> ID = E
E -> E + E
E -> ID
E -> NUM

Let's try to derive a sentence using these rules:

S ; S
ID = E ; ID = E
ID = E + E ; ID = E
ID = ID + NUM ; ID = NUM

The source code behind the sentence we derived here (before the lexer handled it) could have been:

var = i + 4; j = 5

Here the token values are: var, i, j, 4, 5
The token types are: ID, NUM, "+", ";", "="

Notice the finished symbols that never appear on the left-hand side of a production. These are called "terminal symbols". Terminal symbols correspond to the token typ…
