Securing a web site and its pages is one of the most critical tasks a developer has to get right. In ASP.NET MVC, we are given membership and role providers to implement security. We can also use authorization attributes, or write our own, to lock down controller actions.
Security can be broken down into two aspects. The first is authentication: validating that you are who you say you are, typically by checking a username and password against a common repository. The second is authorization: determining which actions you, as an authenticated user, are allowed or denied on a given site or page.
The ASP.NET MVC framework allows custom membership (the authentication piece) and role (the authorization piece) providers. Although this is fine for most applications, creating or editing roles can require recompiling and redeploying the whole application. To get around this, you can create another layer, call it Context (or Features) if you will, and associate each role with multiple features.
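To sketch the idea, here is a minimal role-to-feature mapping in C#. The names `RoleFeatureMap` and `HasFeature` are hypothetical, not from the sample application; in practice the dictionary would be loaded from a database or config file at runtime so roles can change without a recompile.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical mapping of roles to features. Loading this from a table
// or config file means adding a role only touches data, not code.
public static class RoleFeatureMap
{
    private static readonly Dictionary<string, HashSet<string>> map =
        new Dictionary<string, HashSet<string>>
        {
            { "Editor", new HashSet<string> { "ViewPage", "EditPage" } },
            { "Reader", new HashSet<string> { "ViewPage" } }
        };

    // A user's roles grant the union of their features.
    public static bool HasFeature(IEnumerable<string> userRoles, string feature)
    {
        foreach (var role in userRoles)
        {
            HashSet<string> features;
            if (map.TryGetValue(role, out features) && features.Contains(feature))
                return true;
        }
        return false;
    }
}
```

A custom authorization attribute could then call `HasFeature` instead of hard-coding role names in the controller actions.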
I have built a sample application and published it on CodePlex. You can find the link here
Let me know if/when you check it out and what you think about it. Thanks for reading and happy coding!
Baskin
Wednesday, December 9, 2009
Wednesday, September 30, 2009
Composition vs. Inheritance
Reusing existing code and avoiding duplication lends itself to better, more scalable and maintainable design and application architecture, and is widely practiced in the Object-Oriented Programming (OOP) community. Inheritance is officially one of the three pillars of OOP, although composition isn't. The difference between the two: inheritance depicts an "Is-A" relationship, whereas composition is a "Has-A" relationship.
As an example, let's take the object "book". I have a special type of book, called "technical book", which derives from the object "book". Certainly a technical book is a special type of book, but it still is a book. Composition, on the other hand, is one object containing another object, hopefully related, but not necessarily. Consider the object "Cover": the "book" has a "cover". Here is a C# code snippet.
// Inheritance: TechnicalBook is a Book
public class TechnicalBook : Book
{
    // every method & property of Book class (protected + public)
    // other methods and properties of TechnicalBook
}

// Composition: TechnicalBook has a Cover
public class TechnicalBook
{
    private Cover technicalBookCover = new Cover();

    public string Title()
    {
        technicalBookCover.Title = "Design Patterns";
        return technicalBookCover.Title;
    }
}

public class Cover
{
    private string title;

    public string Title { get { return title; } set { title = value; } }
}
Why is this important? Well, if we want to create applications which will be maintained over time and be used and extended in the future, I believe it is critical we make this distinction and model our applications accordingly.
My past experience leaned more toward inheritance, but I resented the times I painted myself into a corner, since the dependencies created via inheritance become a burden to carry forward. Inheritance always exposes every public member of the base class, while composition gives you the option to expose members selectively, if at all.
By using composition, the TechnicalBook class is insulated against changes to the inner class, Cover. With inheritance, any modification to the base class Book affects the TechnicalBook class. If you want to bind the two classes, you can use an interface as a binding contract and let both classes implement that interface.
In conclusion, composition requires more work to expose the implementation of the inner class objects, but the decoupling achieved between the current type and the base type can be quite valuable. Inheritance still holds true in cases where a clear "Is-A" relationship exists, but in the long run things may need to be tweaked. And that's when you may want to favor composition.
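That binding contract can be sketched roughly as follows. The `IBook` interface is an invented name for illustration; the point is that callers depend only on the contract, while `TechnicalBook` composes a `Cover` instead of inheriting from `Book`.

```csharp
// A shared contract lets both classes vary independently
// while callers depend only on the interface.
public interface IBook
{
    string Title { get; }
}

public class Book : IBook
{
    public string Title { get { return "A Book"; } }
}

// Composition: TechnicalBook holds a Cover rather than deriving from Book.
public class TechnicalBook : IBook
{
    private readonly Cover cover = new Cover { Title = "Design Patterns" };
    public string Title { get { return cover.Title; } }
}

public class Cover
{
    public string Title { get; set; }
}
```

A change to `Book`'s internals now cannot ripple into `TechnicalBook`; only a change to the interface itself would.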
Thursday, August 20, 2009
Why Change?
Pretty sure this happens to everyone. One day, some higher authority comes along and tells you that what you have been doing is not quite "right" anymore and there is a new way of doing things. Not only that, along the way, while you are in transition, even the new way is changed. What do you do? Do you change your ways of doing things?
This is how I perceive Microsoft's data access technology. Don't get me wrong: it is great to have choices, and the future of data access is getting better (okay, I am hopeful, and I am not even blaming Al Gore on this one). I joined the data-access technology evolution in the mid-'90s. Back then we had ODBC, OLE DB, and ADO. Programming with recordsets in ADO in VB 6 was always cumbersome, but I had my fun coding. While Not rst.EOF ... Wend, anyone?
Then in the early 2000s came .NET 1.0 (2002), which quickly became 1.1 (2003). We were introduced to ADO.NET. Between ADO and ADO.NET there were a lot of changes (besides the addition of .NET); the old way of doing things was no longer recommended. Now we had datasets, data readers, and data adapters. The relational database is still there (and it will be for a while!), but how we shuffle data between the application tiers was altered. Besides data access, OOP finally became a first-class citizen in VB.NET. And that's when I switched to C#: I was getting too confused between VB 6 and VB.NET, though I still did application development in VB.NET. Besides, it is really not too hard to move between the languages; it all comes down to the Common Language Runtime (CLR). Not a big fan of language wars... just use the right tool for the right job. Maybe Ruby on Rails...
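For anyone who missed that era, here is a small taste of the disconnected ADO.NET style, built purely in memory (no database) just to show the shape of the API:

```csharp
using System;
using System.Data;

class AdoNetTaste
{
    // A disconnected, in-memory table: the big shift from ADO recordsets.
    // In real code a data adapter would fill this from a database.
    public static DataTable BuildTable()
    {
        var table = new DataTable("Books");
        table.Columns.Add("Title", typeof(string));
        table.Rows.Add("Working Effectively with Legacy Code");
        table.Rows.Add("Design Patterns");
        return table;
    }

    static void Main()
    {
        // Iterating rows replaces the old While Not rst.EOF ... Wend loop.
        foreach (DataRow row in BuildTable().Rows)
            Console.WriteLine(row["Title"]);
    }
}
```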
Meantime I delivered and worked on ADO.NET 2.0 projects along with ASP.NET. Things were working out okay; then, around 2003, I started hearing about eXtreme Programming (XP), writing tests first with Test Driven Development, and then came Ajax. I did not quite get it for a while. But the best way to learn a new technology is to do a project with it. Using the NUnit framework on a few applications, I realized how easy and fun it is to write code and test it right there, without launching the debugger, using Asserts and such. And I changed once again, and I don't ever want to go back to the old way, especially for enterprise applications. As Michael Feathers tells you in his book "Working Effectively with Legacy Code", legacy code is code which does not have tests.
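Those first NUnit tests looked roughly like this (classic NUnit attribute style; the `Calculator` class is a made-up example, not from any of the projects mentioned):

```csharp
using NUnit.Framework;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        // No debugger needed: run the test, read the green/red bar.
        var calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}
```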
In 2005, I attended my first Professional Developers Conference in Los Angeles. The highlight of that conference was the announcement of Linq (short for Language Integrated Query). Using this colorful, at first a little terse, syntax, one can query the database with IntelliSense support right in the code. I thought it was very powerful and couldn't wait to use it. But just as I was ramping up with this Linq-to-SQL technology (by the way, it only worked with SQL Server), I was supporting a few Oracle databases (I was using Oracle's .NET library, and there is no Linq in that). Thus I did not ramp up too far ahead.
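The syntax that wowed the PDC crowd, shown here against an in-memory list rather than SQL Server (LINQ to Objects, so it runs anywhere; with Linq-to-SQL the same query shape would be translated to SQL):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LinqTaste
{
    public static List<string> FindDesignBooks()
    {
        var books = new List<string>
        {
            "Design Patterns", "Working Effectively with Legacy Code", "Code Complete"
        };

        // Query syntax with full IntelliSense and compile-time checking.
        var hits = from b in books
                   where b.StartsWith("Design")
                   orderby b
                   select b;

        return hits.ToList();
    }

    static void Main()
    {
        foreach (var b in FindDesignBooks())
            Console.WriteLine(b);
    }
}
```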
Meantime, around 2007, we heard about another technology called the ADO.NET Entity Framework. With EF, databases besides SQL Server can also be accessed, using entity clients and object contexts. And it's not even done yet: after version 1.0, Microsoft is delivering EF v4 (what? is that a typo? no, that's MS; they can do that: versions 2 and 3 are skipped). Here we go again...
My point is that change is going to happen whether you like it or not. If you don't change, you risk becoming obsolete. Be flexible and learn the new technologies. Master them as much as you can, but when it is time to jump and move on, do not resist; adapt as soon as you can. Software project delivery timelines are usually set aggressively. How do you factor this into your delivery timeline? Hmmm... that may be another blog post.
Again, the mantra sometimes is: the old technology is fine and it works, hopefully it is even testable. Now, do we want to change it? It works, right? Is it worth changing? Maybe. But I know I will keep changing, hopefully for the better, when it comes to data access.
References:
Working Effectively with Legacy Code
Monday, July 13, 2009
Working with Offshore
With the increased efficiency of online communication tools, offshore software development has become more prominent in the IT industry. Currently I am working with a bright offshore development team located eight time zones away. There are some challenges, but overall things are moving well. In this post, I would like to share my experiences and offer a few tips, if I may.
Communication
Obviously, not everyone speaks, writes, and reads English as their first language. English is also my second language, but having lived in the States for over fifteen years, I seem to do well and even blog in it. Specifically with software development, getting one's point across is essential when the team is not co-located. We use a few online instant-messaging and audio/video programs. Earlier we tried ooVoo, but the offshore team found it CPU-intensive (with all the ads and everything on the side, which did not surprise me too much), and they suggested Skype, so we switched. It works pretty well and complements our daily email exchanges. Definitely pick an instant online communication tool that the whole team uses.
Within the team, our direct point of contact speaks the best English, as expected, while the others are better in writing. There was one time I called one of the developers and could not understand what he was saying, and he probably did not understand me either, because he asked me to speak slowly. And yes, I can speak quite fast at times. So I quickly switched to instant messaging. This let my offshore developer think over the text he reads before responding. I don't plan on calling him directly anymore. A conversation is a good thing, but it feels odd when the other party is not sure what is being said. Video definitely helps convey gestures and such, but in general it is good to keep things in writing.
Get to know your team
I haven't had the chance to meet the team in person yet, but early on I mentioned that I was going to visit their country and requested certain information. They were quite excited and eager to share the info and their culture. I asked one of the developers about his taste in beer and whether he drank at all. Being from overseas myself, from a place not too far off from their location, I exchanged a few other stories and got a little more personal. I think this really helps in creating a friendlier, more understanding atmosphere.
Don't be a dictator
One lesson, hopefully learned no matter how smart you are (or think you are), is to cultivate a shared development environment. Recently the offshore team went ahead without checking with us and completed a major refactoring. That caught me off-guard, but there is a lesson in it. The project did need refactoring; I was reluctant to do it right away due to another pending major integration (back-end) process. But that was all right. We regrouped, and just this morning we had a teleconference to discuss our refactoring process. I answered a few follow-up clarification questions, and now everyone is on the same page.
Let them have time to ramp up and study
In this project we are using the ASP.NET MVC framework. Early on, one of the developers decided to take the M (Model) of the MVC web project out into a separate class library project. After checking and figuring out the reasoning, we found it was not such a good idea, since ASP.NET MVC favors convention over configuration. Yet the idea is to let the team explore and try to improve the code through refactoring and remodeling. My advice to the team was: okay, there is another way of doing it, but prove why that way is better and whether it makes the most sense.
Respect gets respect
This goes without saying, but it is better to state it anyway. Just because you are supervising, there is no point in being arrogant or bullish when discussing issues with the team. Everyone has an opinion, and when the time comes, it is best to discuss. Try to understand where the team is coming from and respect their ideas and creativity. There is always more than one way to skin the cat ;-)
Try to stay ahead of them
Okay, this is my current challenge right now. We added two more developers and it is getting harder for me to stay ahead, but I will keep reviewing the code base and providing feedback where it fits. I also have the other integration project, which is ramping up as well. Anyway, that is what I have to do. The point is, it is important to stay ahead of the team, give them direction, and try to anticipate changes in the business and requirements. The one thing that is certain is that things are going to change. Be responsive to changes, and hopefully more proactive than reactive.
There will be many more tasks for us to do in this project. We have an aggressive timeline, but using agile methodologies, specifically Scrum, really fits the model. We modified the process a little to meet our needs, but it is working. Our business analysts are creating documentation every day, which the development team is consuming. The TDD approach is really taking hold; just today we got notification that our initial code-coverage setup and implementation are taking place. These are great signs of progress and will hopefully allow us to deliver this product on time and within budget. I am stoked to work with these bright guys and glad to mentor and learn from them as the project moves forward. Understanding each other's culture, onshore or offshore, and working on our communication skills are essential to what we do: creating functioning software applications for the healthcare business.
Labels:
Agile,
ASP.NET MVC,
off-shore teams,
offshore,
remote teams,
scrum
Wednesday, May 27, 2009
It Depends...
Wonder how many times you have heard this as an answer to a specific question from an expert on the matter. Scratched your head about what it means and questioned their authority? Yes? And it is true in many respects, too, given the choices we have in our technologies. That is a good thing, right? Well, guess what... it depends...
Recently I was tasked with creating a technical specification document for a given application module. I delivered the document knowing what I knew at the time and tried to avoid the Big Design Up Front (BDUF) anti-pattern. I have to admit there are quite a few holes in my document, but my goal is to fill them in as I learn more about the requirements and increase my understanding of the application. So far, all we have is a few screens of wire-frames and an initial functional specification document. The business analysts are hard at work peeling back those requirements and creating/modifying the functional specifications. It's all good in the end, as long as we keep a grasp on the amount of documentation. And yes, we are using an agile methodology, with daily scrum meetings and two-week Sprints. We are staying focused.
Including myself, the team is very new to this company. What are some of the challenges? Lack of requirements, one might say, but I have yet to be on a project where this isn't the case. The second you think you have nailed down the requirements, guess what happens: they change! The Head First Object-Oriented Analysis and Design book had a good chapter on this, "You're perfect. Now change!" Exactly. So I am not too hung up on this. Just give me enough to start coding...
There is some worry about the business process and a lack of decisions. We are also going to use an offshore team to do some of the development and testing, and I expect to work closely with that team at some level. I have a few questions and concerns about that, but I will bring them up as things move forward. We are keeping development cycles short and feedback loops tight. The wire-frame developers griped a little about not seeing the whole picture, but sometimes you don't need to.
As far as technologies go, I am all about .NET development, which is our platform. This is a web-based application with multiple tiers, with heavy emphasis on separation of concerns and testable, decoupled layers. There will be WCF services used by the business tier as well as by third parties. I am pushing ahead with the ASP.NET MVC framework, business objects (logic) in the middle, and the ADO.NET Entity Framework in the data access layer. We are going to use Test Driven Development, writing our tests first and thinking "unit testing". I realize that TDD is not just unit testing, but I am already spreading the word about which functionality to test. I really believe this will help us increase our code-coverage numbers as well as get CCHIT requirements fulfilled. My goal is that when someone asks me how something works, I can just run a few unit/integration tests and show that the requirements are met. Yes, you've got to have tests... there is no way around that!
Meantime I plan to write quite a bit of throw-away proof-of-concept code to aid the actual coding, and to address the cross-cutting concerns: exception management, authorization, authentication, logging, caching (deferred until I have more hands-on time with the application), encryption, and security.
Our database is SQL Server, with some interfaces to legacy DB2 databases. We do have an aggressive timeline, as most development teams do. We are definitely going to be busy. Well, if you ask me how it is going, you already know my answer...
Happy Coding!
Labels:
Agile,
anti-pattern,
ASP.NET MVC,
BDUF,
It Depends,
TDD
Wednesday, April 8, 2009
Introduction to HL7
As part of my new role and job, I am getting up to speed with health care information systems. A few weeks back, during our TechMasters meeting, I spoke briefly about HL7, but could not find enough material to summarize what it is. As I researched more, I found more material. So what is HL7?
HL7 stands for Health Level Seven (7). Why seven? It comes from networking: if you have ever studied computer networks, the OSI (Open Systems Interconnection) model is typically conceived with seven layers, and the seventh is the application layer. Hence HL7 focuses on issues that occur at the seventh layer.
The standard deals with interfaces between systems that send or receive patient admission/registration data; admission, discharge, or transfer (ADT) data; queries; resource and patient scheduling; orders; results and clinical observations; billing; master file updates; medical records; patient referrals; and patient care. It is an effort to let disparate applications and data architectures in heterogeneous system environments interoperate and communicate with each other.
A typical hospital today has its own (proprietary, vendor-specific) computer systems for admission, discharge and transfer, billing, and accounts receivable. These systems may have been designed with a centralized or a distributed architecture. Over time, these applications are re-written or patched to meet current computing needs. Remember Windows 95? The security model conceived then is not what we use today; now think of Windows Vista's security model. Put this in terms of health information systems and you will see a similar progression. However, application development cycles are often not well-defined, and lots of patching and updating has to occur. With many vendors shipping their own protocols and applications, there arises a big need for interoperability.
A framework is needed to minimize incompatibility and maximize information exchange between systems. HL7 serves as a superstructure in this environment, facilitating a common specification and specification methodology.
So far, there have been three versions of HL7. The first version was created in 1987, and the second came out in 1988; version 2 has since had several revisions. HL7 version 3 came later (time frame?), and it is XML-based. I plan to blog about the HL7 versions separately later on.
To summarize HL7 and what it tries to accomplish, here are some bullet points:
- HL7 is not a "complete solution", rather it provides a common framework for implementing interfaces between disparate vendors
- Protection of Health care information
- Roles and relationships, such as patients, physicians, providers.
- Accountability, audit trails
- Uniform data definition and data architecture
- Integration of health record
- Interface engines
- Rules engines
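To make the message format concrete, here is what an HL7 v2.x message fragment looks like and a naive C# parse of it. The message content is made up for illustration; real interfaces go through an interface engine, but the wire shape is this simple: segments separated by carriage returns, fields by pipes.

```csharp
using System;

class Hl7Taste
{
    // A minimal, invented ADT (admit) message in HL7 v2.x encoding:
    // MSH is the message header segment, PID the patient identification.
    public const string Sample =
        "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|200904080830||ADT^A01|MSG00001|P|2.3\r" +
        "PID|1||12345^^^HOSP||DOE^JOHN||19700101|M\r";

    // Segments are delimited by carriage returns.
    public static string[] Segments(string message)
    {
        return message.Split(new[] { '\r' }, StringSplitOptions.RemoveEmptyEntries);
    }

    static void Main()
    {
        foreach (var segment in Segments(Sample))
        {
            // Fields within a segment are pipe-delimited.
            var fields = segment.Split('|');
            Console.WriteLine(fields[0] + " segment with " + (fields.Length - 1) + " fields");
        }
    }
}
```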
After the initial posting of this blog, "HL7-Tools" started following me and mentioned that a link to their resources could be added. Certainly. Here is another resource for learning and practicing: HL7 Tools, Utilities and Resources. Thanks for following.
Monday, March 23, 2009
Remember... it all starts small
Recently I have been working on a Windows-client VB.NET project. With my client, we took some time to re-design the solution and project files, and we created layers to handle certain tasks and decoupled the interfaces. I took charge of creating a simple database to support configuration and automatic/batch processing of certain tasks.
I opted to use Linq-to-SQL, but still have a separate business layer and data access objects to talk to the UI. I still favor this approach. It is going slowly, but working nicely.
Now a couple of sticking points... first of all, I did underestimate the time to integrate the database/business-object layer with the existing UI. This is a very small project, and at least on my side, I don't have any software specification. I made the database model after looking at the existing XML files, used a Linq-to-SQL .dbml file to generate the model in code, and implemented the CRUD operations with TDD. Created a test harness, and all that good stuff...
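The pattern that makes that TDD work is putting the CRUD behind a small repository interface: the Linq-to-SQL DataContext implements it in production, while a fake implements it in the test harness. A sketch of the idea (`IConfigRepository` and `ConfigEntry` are invented names, not the actual project's):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ConfigEntry
{
    public int Id { get; set; }
    public string Value { get; set; }
}

// The business layer and UI talk only to this contract,
// never to the DataContext directly.
public interface IConfigRepository
{
    void Add(ConfigEntry entry);
    ConfigEntry Get(int id);
}

// In-memory fake for the test harness; the production version
// would wrap the generated Linq-to-SQL DataContext instead.
public class FakeConfigRepository : IConfigRepository
{
    private readonly List<ConfigEntry> store = new List<ConfigEntry>();

    public void Add(ConfigEntry entry) { store.Add(entry); }

    public ConfigEntry Get(int id)
    {
        return store.FirstOrDefault(e => e.Id == id);
    }
}
```

Tests then exercise the business logic against the fake, no database required.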
The biggest problem is the lack of time for testing the UI. Since I took over an existing UI, it wasn't entirely clear what each button should do, what the states are, where the data would be stored, and so on.
Looking back a week or so, what I should have done before giving an estimate was sketch the UI as I saw it, map out the functionality, and give more granular, accurate estimates of how long each task would take. You can always write a piece of functionality pretty quickly; testing it and integrating it with the other components of the project takes time and usually requires refactoring.
And communication is also key. My client is pressed for time and resources too, and is running into issues with code integration. We are having a few problems, and I am trying to help out as much as I can. Not being available for a quick call last week, and not getting feedback, threw me off, and it was hard to get back into it after that.
On top of it, I got busier with my main task: finding another full-time position. At the end of February, the consulting company I had been with laid me off, along with a few other consultants, due to the economic downturn and a lack of projects. Fortunately, I was offered a full-time position last Thursday and expect to start next week. This is obviously good news and a sigh of relief in this economy. I would like to thank all my friends and family for standing by me during this transition. Their support and friendship really made it happen and kept me focused.
In the meantime, I really appreciate having a side project to keep me busy, though I need to do a better job of giving work estimates. And remember: it all starts small...
Friday, February 13, 2009
putting it all together
In the last few days, I have been diving into a variety of .NET technologies. This should be no surprise to anyone in this industry who wishes to stay current and employed, especially in this economy. So here it goes...
Well, I started dabbling with Silverlight 2.0: creating XAML pages and the simple Silverlight application that becomes available once you install the Silverlight 2.0 SDK. The place to start is Learn Silverlight. Drawing circles and ellipses, buttons, StackPanels and Grids on a UserControl is cool, but it does not help my business. So after reading a few chapters of a certain book and watching a few videos from the official site, I decided to give it a real go.
As it turns out, Silverlight was just the prelude and whetted my appetite to dig deeper. Since I am data-oriented, I started by creating a SQL Server database in SQL Server Express. I created a four-table data model, which I will detail later; for now I just want to summarize the work I have done as of this Friday afternoon.
With the database in place, I created a data access layer project in Visual Studio using C# (.NET Framework 3.5). Based on the feedback and posts I have read, I felt inclined to use the Entity Framework. As you may know, the Entity Framework ships with Visual Studio 2008 SP1. Setting the model up in Visual Studio is pretty easy, but I ran into a few snags with my data model, especially the associations. Also, my copy of Visual Studio would not open the modeler at first, but I found a trick: if I open and close the file with the XML editor, the modeler becomes available.
With the model in place, I wrote data object classes to enable basic CRUD operations. I made sure to create a separate test project and class to run my unit tests (integrity tests, really). After the model checked out fine, I moved on to the next stage.
Next was creating a WCF service that exposes the data objects above. I created separate classes and hosted the service on the local IIS (running on XP, so IIS 5.1). I had to remind myself of certain WCF command-line utilities.
Then I created a WCF client to test the service I hosted on IIS. I used the "svcutil" utility to generate a proxy class and included the generated app.config file. My client was a simple console application; it does not include the service client yet. So far, so good.
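For reference, the service side of this step boils down to an interface marked up with WCF attributes plus a class implementing it. The contract below is a hypothetical, trimmed-down stand-in for mine, just to show the shape; the real operations hand back the data objects from the layer above.

```csharp
using System.ServiceModel;

// Hypothetical contract; the real one exposes the CRUD operations
// of the data object layer rather than a single lookup.
[ServiceContract]
public interface ICatalogService
{
    [OperationContract]
    string GetItemName(int id);
}

public class CatalogService : ICatalogService
{
    public string GetItemName(int id)
    {
        // In the real service this call goes through the data access layer.
        return "Item " + id;
    }
}
```

From there, running something like `svcutil http://localhost/CatalogService.svc?wsdl` on the client side generates the proxy class and a config file to merge into the client's app.config (the address here is made up for the example).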
Then, early this afternoon, I watched another video on consuming WCF services from Silverlight. There were a few things I had to do, and I have a few gripes about why certain things are done the way they are, but again, I will talk about that later. Oh, have I mentioned I am using ASP.NET MVC as my web host project? So hooking up the separate XAML pages, or trying to, I should say, was fun.
I had really worked on each individual component separately, and just an hour ago I was able to put it all together. I got it working. And of course, there is a lot more work to do. I will update this post as I find time. Thanks for reading.