In my last post (read it here) I showed the results of a new documentation process for CruiseControl.NET. The aim was to move the documentation from being split over two different locations (the code and the wiki) into one (the code), to make it easier for developers to write documentation, and to standardise the documentation along the way!
To do this I have used doc comments in the code (yes, it bloats the code, but Visual Studio can collapse these comments). Most of the time I tried to use the standard documentation tags (e.g. summary, remarks, example, etc.), but unfortunately the standard tags don't cover everything we need.
The good news is that when Microsoft wrote the doc comment extraction tooling they didn't enforce the tag types. This means we can add additional tags and they will happily be extracted. Of course, we don't want to go overboard, as IntelliSense in Visual Studio doesn't know about them (does anyone know how to add new doc comment tags to Visual Studio?)
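As a quick illustration (the class and tag contents here are just for demonstration), a class can be commented like this:

```csharp
/// <summary>Saves project state to a file.</summary>
/// <title>File State Manager</title>
/// <version>1.0</version>
public class FileStateManager
{
}
```

Building with csc /doc:output.xml copies the <title> and <version> elements into output.xml verbatim – the compiler only checks that the comment is well-formed XML, not which tags it uses.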
The new tags I have added for classes are:
- title: An optional title for the item – otherwise it will default to the name in the ReflectorType attribute.
- version: The version that this is available from.
The new tags I have added for fields and properties are:
- default: The default value if this element is omitted.
- version: The version that this is available from.
Additionally, I expanded the code tag (which already exists) to support a title attribute. This applies a title to the code example block – Confluence allows code titles, so this ties the two together nicely.
The documentation generator relies mainly on reflection. However, reflection alone generates only the bare bones of each article. For example, this is what the state element looks like:
This is because this class has no doc comments (none at all!) associated with either the class or the fields/properties that are reflected.
Adding the summary tag adds the description:
Adding a title tag:
And a version tag (this was a guess):
Finally, to add an example I used the example tag (with the new title attribute):
Lastly, the configuration element data is added by adding the default and version tags to the actual property definition:
Notice how this was a progressive enhancement of the documentation. If we added a remarks tag to the class documentation we would get a notes section (this element doesn’t have any notes at the moment).
The doc comments for the class look like this:
/// <summary>
/// The File State Manager is a State Manager that saves the state for one project to a file. The
/// filename should be stored in either the working directory for the project or in the explicitly
/// specified directory. The filename will match the project name, but will have the extension .state.
/// </summary>
/// <title>File State Manager</title>
/// <version>1.0</version>
/// <example>
/// <code title="Minimalist example">
/// <state type="state" />
/// </code>
/// <code title="Full example">
/// <state type="state" directory="C:\CCNetState" />
/// </code>
/// </example>
[ReflectorType("state")]
public class FileStateManager : IStateManager
And for the property:
/// <summary>
/// The directory to save the state file to.
/// </summary>
/// <version>1.0</version>
/// <default>The directory CCNet was launched from.</default>
[ReflectorProperty("directory", Required=false)]
public string StateFileDirectory
Where possible the document generator pulls the data from the attributes and property definition. For everything it tries to find the relevant doc comments – leaving them empty/blank if not found.
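To sketch the lookup side of this (a minimal illustration, not the generator's actual code): the compiler-generated XML file keys each member by an ID string, so finding the doc comments for a reflected type is a simple search.

```csharp
using System;
using System.Xml.Linq;

class DocCommentLookup
{
    // Sketch only: find the doc comments for a type in the compiler-generated
    // XML file. Member names are stored as "T:Namespace.Type" for types and
    // "P:Namespace.Type.Property" for properties.
    public static XElement FindTypeDocs(XDocument docFile, Type type)
    {
        string id = "T:" + type.FullName;
        foreach (XElement member in docFile.Descendants("member"))
        {
            if ((string)member.Attribute("name") == id)
                return member;
        }
        return null;   // no comments found - leave the section empty/blank
    }
}
```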
What do you think so far?
I’ll do one more post – how the document generator works from an external viewpoint, and how to get the documentation into the wiki.
Yes, it didn't take me very long.
In my last post (here) I said I was going to go through all the documentation and ensure that it is:
- Up to date
For the first step in this task I generated a list of all the types in the Remote, Core and WebDashboard projects that have the ReflectorType attribute (this attribute means a type can be used in the configuration).
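The scan itself is straightforward with reflection – a rough sketch (the generator's actual code may differ) looks like this:

```csharp
using System;
using System.Reflection;

class ReflectedTypeScanner
{
    // Sketch only: list every type carrying the ReflectorType attribute.
    // Matching on the attribute's type name avoids referencing NetReflector directly.
    public static void ListReflectedTypes(params Assembly[] assemblies)
    {
        foreach (Assembly assembly in assemblies)   // e.g. Remote, Core, WebDashboard
        {
            foreach (Type type in assembly.GetTypes())
            {
                foreach (object attribute in type.GetCustomAttributes(false))
                {
                    if (attribute.GetType().Name == "ReflectorTypeAttribute")
                        Console.WriteLine(type.FullName);
                }
            }
        }
    }
}
```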
This identified 176 classes that needed to be checked. Not all of these classes are used as tasks, triggers, source control blocks, plug-ins, etc. – some are ancillary types that provide sub-information (e.g. named values, user details, etc.)
This data was dumped into an Excel spreadsheet, so I could go through each and every class to check the documentation. As you might have guessed, this didn’t last very long…
The first class I checked was DefaultQueueConfiguration – which maps to the queue tag. Now, as classes go, this one is fairly simple – only three properties that are reflected. But, after half an hour of flicking backwards and forwards between the documentation and the code, I was thinking: why do we have the data split over two places?
Instead I copied all of the documentation from the wiki into the code as doc comments (including some reformatting). This means everything is in one place now – yay!!! However, this doesn't solve the problem of documentation – it just means the data is duplicated.
So, rather than do more documentation I quickly put together a little application that used the XML comments and reflection to automatically generate the documentation. Now the code is back to being the single source of truth, plus the documentation will now be more consistent (since it is automatically generated).
Here are some before and after shots of how the documentation looks (they are big, so click on the thumbnail to see the full version):
Now I am back to the process of reviewing documentation – in a future post I'll explain how the process works and what is required.
As part of the preparation for the 1.5 release (hopefully soon) I thought I’d go through and review the documentation to ensure it is:
- Up to date
All the documentation for CruiseControl.NET is stored at http://confluence.public.thoughtworks.org/display/CCNET. This is an online wiki with its own formatting mark-up (it uses Confluence).
Now, the question arises: how do we review the documentation?
For CruiseControl.NET the source of truth is the code, so I need to go through both the documentation and the code to ensure everything matches. What would be really nice is if everything was documented in the code – but I won’t comment on what our documentation comments are like.
So, that leaves a nice challenge of checking everything manually.
Basically, I’m planning on doing it two ways. First, I will go through the code and verify all the items in the documentation. Hopefully this will cover most of the pages in the wiki. Once I’ve done that I will then go through the remaining pages and check them.
Now for the bad news, I’ve extracted all the classes in CruiseControl.NET (from the Core, Remote and WebDashboard assemblies) and there are 176 classes that can be reflected – which means 176 classes that can be included in the documentation.
So, this may take some time…
Note: As a side task I am also planning on importing the documentation from the wiki into the code. This means that in future we should be able to automatically verify some of the documentation (hopefully!)
Some New Stuff, Some Old Stuff
Like any system, documentation is an important part of CruiseControl.NET. With the 1.5.0 release, we’re adding a lot of new functionality – all of which needs documentation. So I’ve taken some time out of resolving issues and adding code coverage to document some of the new things that are coming.
As well as documenting the new stuff, I've also been updating some of the older documentation, although this is mainly around noting what has changed. Some of our changes have had a major effect on all the tasks and publishers. I've also tried to add some value to the documentation around versioning.
Finally, I’ve also been trying to add some more contextual documentation. A large part of the current documentation describes the various configuration options, and what can be set. We also have some stuff on using the system, including with other tools and components, and a little bit of development documentation. Basically, I’m trying to fill out the last two sections a bit more!
We have lots of new stuff for 1.5.0, some of which has been requested for years! In terms of configurable items, we have added the following:
- Security (big change)
- Dynamic values and parameters (wide-ranging change)
- New tasks and publishers (parallel, sequential, conditional, NCover, PowerShell and FTP)
- New source control providers (RoboCopy, VSTS, FTP, Git and Mercurial)
Most of these have now been documented (just the RoboCopy and VSTS source control blocks are outstanding).
Before I move on, a quick comment on the security. This is a very big change, and one that has a lot of associated documentation. I’ve gone through and added the basics on all the configuration options, but as Ruben pointed out, the sheer volume of configuration for this area can make things very confusing. As such, I’ll be going through and adding more over time.
One of the widest-ranging changes is the addition of dynamic parameters. Nearly every single task and publisher can use them! As such I've gone through and added this to all the tasks and publishers that can use them.
Additionally, we have an issue where we have only one documentation wiki – which has to handle all versions of CruiseControl.NET. To try and make sense of what works in various versions, I have added two additional items.
First, all new items have a Version section. This section says when the item was first added (e.g. 1.4.4, 1.5.0, etc.) This should help people who want to try a new item but are still using an older version. Of course, there are a huge number of items that have been around for a long time – these I have not documented. Basically, if there is no Version section, then the item will work in 1.4.0 – I'm not sure about older versions!
Second, to the configuration elements table I have added a Version column. This works in the same way as the Version section, it says when the element was first added. Again, since there is a lot of old stuff, not everything will have a version number. Perhaps if I am bored one day…
Since both security and dynamic values and parameters are new, I have tried to explain what they are for, how they work and what some of the possibilities are. For security, I’ve started adding some scenarios of how it can be used. Actually, at the moment there is only one, but there are two others in this blog that I need to update and migrate over.
Finally, I’ve also started writing about how security can be extended. When I designed it, I added all sorts of interfaces for different things. So, if you don’t like what I have done, it should be easy enough to swap out a part and use your own implementation.
While I've added a lot to the documentation, I don't consider it anywhere near finished yet!!! Here are some of the things I would like to see done around documentation:
- Review everything: a lot of the configuration documentation is inconsistent – not surprising considering the number of people who have worked on it over time – it would be nice to go through and apply a consistent style (e.g. what is included, etc.) throughout the entire documentation.
- Client apps: there is some documentation on these, but a lot of it is out of date, especially considering the number of things that have changed.
- Developer documentation: my first few months on CruiseControl.NET was just trying to figure out what is what, heck, even now I am still confused at times!
Anyway, these are just my thoughts, at least we have some documentation!!!
So, in closing, please take a look at some of the new documentation and let me know what you think. As always, I'm too close to the trees to see the forest.
On the mailing list there was a question today about the parts of the dashboard and how they match up with the configuration. So I thought it’s time for me to write down what I know about the parts of the dashboard and how they work.
First, there are four basic levels to the dashboard. These levels are:
- Farm – the areas of the dashboard that do not relate to any build server
- Server – the overall summary of a build server (e.g. an instance of the CC.NET service or console)
- Project – a project on a build server
- Build – an instance of an integration for a project on a build server
Each level has its own set of plug-ins, as would be expected since they show very different information. To show these levels, here is a screen shot of each.
Note: These are running on my development instance of CC.NET, hence they don’t have a proper version number or very many plug-ins installed.
Farm Level View
Server Level View
Project Level View
Build Level View
It is important to know about these levels, because putting a plug-in in the wrong location will put items in an unexpected place!
The Parts of a Page
Every page has four basic parts – the header, the footer, the side bar and the content area – and a plug-in only has control over one of them.
The header and the footer are both generated by the main page template. These contain items common to every page, such as the name and version of CC.NET, common links, breadcrumbs and the date/time the page was rendered. They do change slightly depending on the level, but this is outside the control of a plug-in.
The side bar contains a set of links. Again, these vary slightly depending on the level, but they are controlled by the system again. However, these links are defined in the configuration – each plug-in exposes one or more links to add to the side bar. More on this in a little while.
Finally, there is the content area. This is entirely up to the plug-in to populate. The system completely ignores the content area and assumes the plug-in will generate meaningful content. This content is accessed by clicking on one of the links in the side bar.
Overlaying these parts onto a farm level view looks like the following:
Where the Config Comes In
Hopefully by now, you will have some ideas of where things fit. But let’s make things perfectly clear and delve into how the configuration relates to the pages.
First, here is the configuration I used to generate the above screen shots:
<dashboard>
  <remoteServices>
    <servers>
      <server name="local" url="tcp://localhost:21234/CruiseManager.rem" allowForceBuild="true" allowStartStopBuild="true" backwardsCompatible="false" />
    </servers>
  </remoteServices>
  <plugins>
    <farmPlugins>
      <farmReportFarmPlugin />
      <cctrayDownloadPlugin />
      <administrationPlugin password="********" />
    </farmPlugins>
    <serverPlugins>
      <serverReportServerPlugin />
    </serverPlugins>
    <projectPlugins>
      <projectReportProjectPlugin />
      <viewProjectStatusPlugin />
      <latestBuildReportProjectPlugin />
      <viewAllBuildsProjectPlugin />
    </projectPlugins>
    <buildPlugins>
      <buildReportBuildPlugin>
        <xslFileNames>
          <xslFile>xsl\header.xsl</xslFile>
          <xslFile>xsl\modifications.xsl</xslFile>
          <xslFile>xsl\NCoverSummary.xsl</xslFile>
        </xslFileNames>
      </buildReportBuildPlugin>
      <buildLogBuildPlugin />
      <xslReportBuildPlugin description="NCover Report" actionName="NCoverBuildReport" xslFileName="xsl\NCover.xsl"></xslReportBuildPlugin>
    </buildPlugins>
    <securityPlugins>
      <simpleSecurity />
    </securityPlugins>
  </plugins>
</dashboard>
Like I said, it is very simple. The part that we are interested in is the <plugins> section. This defines all the plug-ins that can be seen. For the moment, I'm going to ignore the <securityPlugins> as this is specific to 1.5.0 and I've already covered it in a previous post.
There are four sections of plug-ins: <farmPlugins>, <serverPlugins>, <projectPlugins> and <buildPlugins>. These map to each level in the dashboard. Within each section there are one or more plug-ins. These plug-ins define the links that appear in the side bar. For example, in the <farmPlugins> section:
- <farmReportFarmPlugin> maps to the “Farm Report” link
- <cctrayDownloadPlugin> maps to the “Download CCTray” link
- <administrationPlugin> maps to the “Administer Dashboard” link
Most plug-ins have these link titles as hard-coded values within them – so they can’t be changed. Clicking on a link will pass control to the plug-in, which generates the content to be displayed (this is actually a simplification, but it will do for this post.)
Normally, there is only one instance of a plug-in per section – having multiple instances generates some unexpected results, so it is recommended against. However, some rules are made to be broken.
Breaking the Rules – <buildPlugins>
The above details cover most plug-ins in the dashboard. However, the one area where the rules get broken is the <buildPlugins> section. This section defines two special plug-ins – <buildReportBuildPlugin> and <xslReportBuildPlugin> (1.5.0 will add a third – <htmlReportPlugin>).
First, the <xslReportBuildPlugin> element. This plug-in breaks the rules by allowing multiple instances. It does this via two properties: description and actionName. The description is the text of the link that appears in the side bar, while the actionName is the command name passed to the server. The actionName MUST be unique, otherwise the poor dashboard will get confused!
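For example (the FxCop entry and file names here are illustrative), two report links can sit side by side as long as each actionName is unique:

```xml
<buildPlugins>
  <!-- Each xslReportBuildPlugin adds its own link to the side bar. -->
  <xslReportBuildPlugin description="NCover Report" actionName="NCoverBuildReport" xslFileName="xsl\NCover.xsl" />
  <xslReportBuildPlugin description="FxCop Report" actionName="FxCopBuildReport" xslFileName="xsl\FxCop.xsl" />
</buildPlugins>
```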
This plug-in takes an XSLT template (the third property in the element) and transforms the build log for a project into an HTML report. This means we don't need to develop lots and lots of plug-ins (e.g. one for each report); instead we can just write a style sheet and have it transform the results.
The second rule breaker is <buildReportBuildPlugin>. This is a required plug-in and there can only be one instance of it. The reason it is different is that it has an <xslFileNames> section in it. This section is similar to the xslFileName attribute in the <xslReportBuildPlugin>, but it has one major difference: the <xslReportBuildPlugin> generates a link in the side bar, while the <xslFile> elements don't. Instead, their transforms get merged into one big page – the "Build Report".
The following picture shows how these relate:
Clicking on an <xslReportBuildPlugin> will generate a completely different content area, one that is not affected by <buildReportBuildPlugin> at all.
There are four general levels – farm, server, project and build. Each has its own config section and allows a different set of plug-ins.
Within each page, there are four areas – header, footer, sidebar and content. The plug-in generates the content area and defines links to go into the sidebar – everything else is handled by the system.
For the build level, there can be multiple <xslReportBuildPlugin> instances – each defines a link in the side bar, with custom content. The <xslFileNames> section of <buildReportBuildPlugin> defines the items that appear within the "Build Report" – these do not appear as links in the side bar.
Hopefully this provides a better understanding of the parts of the dashboard.
The Heart of the Matter
The core of CruiseControl.Net is the project scheduler. This is the piece of code that is responsible for scheduling builds – without which CruiseControl.Net just wouldn’t work.
Before I delve into the actual workings of the scheduler, let’s quickly review how builds can be scheduled.
First and foremost, in order to schedule a project build, a trigger is required. There are a number of different types of trigger – from interval triggers to scheduled triggers to triggers that monitor other locations or projects. Additionally, triggers can be combined or filtered. But one thing all triggers have in common is that they tell the scheduler when a build needs to be performed.
The second part of the scheduler is the queues. Early versions of CruiseControl.Net had each project running in its own little world – they didn't affect other projects. As the number of projects increased, this led to increased contention on the build servers and a lack of resources. To counter this, queues were added. A queue is a group of projects, of which only one can have a running build at any point in time. Generally they work on a first-in, first-out basis, but it is possible to set queue priorities.
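For reference, a project opts into a queue through attributes on its <project> element – something along these lines (the names and values here are illustrative):

```xml
<!-- Both projects share one queue, so only one of them builds at a time. -->
<project name="ProjectA" queue="BuildQueue1" queuePriority="1">
  ...
</project>
<project name="ProjectB" queue="BuildQueue1" queuePriority="2">
  ...
</project>
```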
With this background, let's delve into how project scheduling actually works.
Managers, Integrators and Queues
The main class that handles everything in CruiseControl.Net is CruiseServer. This is responsible for starting everything and handling all user interactions. But, it doesn’t actually handle the scheduling of builds. This is handled by a number of other classes.
Looking at CruiseServer, there is an IntegrationQueueManager class. This encapsulates all the actual projects and their integrators. An integrator implements IProjectIntegrator and is responsible for the actual triggering of a build. In a moment I'll return to how it does this, but first: how is an integrator started?
When IntegrationQueueManager is instantiated it iterates through all the projects and ensures that there is a queue for each project. If there is no queue, then it creates a new queue with the same name as the project. These queues are all added to an IntegrationQueueSet.
Once the queues have been initialised, the IntegrationQueueManager then calls ProjectIntegratorListFactory to generate all the project integrators. As well as containing the project configuration, the integrator also contains a reference to the associated queue. At the moment there is only one IProjectIntegrator – ProjectIntegrator.
This completes the initial setup of the queues and integrators; the next step is to start the projects integrating. This is done by CruiseServer calling the StartAllProjects() method, or by calling Start() for a specific project (StopAllProjects() and Stop() do the opposite).
When StartAllProjects() is called, it iterates through all the project integrators and checks whether each project can start. This involves checking the configuration and then the state persistence (both new in 1.4.3). If both these checks pass, then the integrator is started by calling its Start() method.
Nice and simple, but here’s a diagram to illustrate this process:
The above initialisation got to the point of calling Start() on ProjectIntegrator. This method starts a new thread that contains a polling loop. Every 100ms, this loop checks whether there is an integration to perform. If there is, it calls the Integrate() method on Project, which performs the actual integration (e.g. pre-build, source control, tasks and publishers).
This check consists of two parts. First it checks the queue to see if there is a pending integration request. If there is a request, it locks any queues that need to be locked, starts a new request and calls the Integrate() method. After this it cleans up and exits the check logic.
The second part of the check, which is only performed if there is no pending request, is to check all the triggers. Each trigger has a Fire() method, which performs the actual check. The output of the Fire() method is an integration request or null – if the output is not null, it gets added to the queue.
The root level trigger is a combination trigger, which merely iterates through each child trigger and calls its Fire() method – it is up to each child trigger whether it returns a request or not.
Once the trigger checking has finished, it checks the queue to see if the request is next. If not, it enters a loop until the request is ready. When the request is ready, the check logic finishes – it doesn't actually call Integrate() after the trigger checking. Instead, the polling loop goes through another cycle and the integration request gets returned from the first part of the check.
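Putting the two parts of the check together, the polling loop can be sketched roughly like this (illustrative only – the method names below follow the classes described in this post, not the actual ProjectIntegrator API):

```csharp
// Illustrative sketch of the integrator's polling loop.
while (running)
{
    // Part one: is there a pending request at the head of the queue?
    IntegrationRequest request = queue.GetNextRequest(project);
    if (request != null)
    {
        project.Integrate(request);          // pre-build, source control, tasks, publishers
        queue.RemoveCompletedRequest(project);
    }
    else
    {
        // Part two: no pending request, so ask the triggers.
        IntegrationRequest fired = trigger.Fire();   // root combination trigger
        if (fired != null)
            queue.Enqueue(fired);            // picked up on a later polling cycle
    }
    Thread.Sleep(100);                       // poll roughly every 100ms
}
```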
The following diagram shows this:
The final piece of the puzzle is the integration queues. I’ve already mentioned them a couple of times, but let’s pull them apart and see how they actually work.
First of all, queues do not use the built-in queue classes – they use a List<> instance instead. The reason for this is very simple: they do more than just adding and removing items – they also allow re-ordering (based on priorities). Plus, items on the queue are not removed until completed, which would cause issues with de-queuing.
In terms of how they work: when an item comes in, the queue checks to see if it already exists. If the item exists, it applies any re-ordering rules (ignore, re-add or replace existing); otherwise it just adds it to the queue. It will look at any other items in the queue and then add the new item after the last item with the same priority, or before the next item with a lower priority.
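A minimal sketch of that priority insertion (illustrative only – the real queue also handles the duplicate-handling rules above, and this sketch assumes a lower number means a higher priority):

```csharp
using System.Collections.Generic;

class QueuedRequest
{
    public int Priority;          // sketch assumption: lower number = higher priority
    public string ProjectName;
}

class PriorityQueueSketch
{
    // Insert the new request after the last item with the same priority,
    // but before any item with a lower priority.
    public static void Enqueue(List<QueuedRequest> queue, QueuedRequest request)
    {
        for (int index = 0; index < queue.Count; index++)
        {
            if (queue[index].Priority > request.Priority)
            {
                queue.Insert(index, request);
                return;
            }
        }
        queue.Add(request);       // empty queue or lowest priority: append
    }
}
```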
When the integrator checks for a request, it will always return the item in position 0 (the start of the list). This item remains there until the integrator performs its clean-up.
Finally, there are a few call-back methods that are used to synchronise the state between the queue and the integrator.
And that’s all there really is to queues – very, very simple. The following diagram shows how queues relate to the integration polling cycle.
This post has covered how project integrations work and how builds are scheduled.
The main driver for the process is IntegrationQueueManager, which is responsible for initialising everything and then starting the actual integration cycle. The integration cycles themselves are handled by a polling thread within each integrator.
The actual process for starting a build is controlled by both the integrator and the queue. The integrator checks the triggers to see if a build should be scheduled, and when one fires it adds a request to its associated queue. Then, in a subsequent polling cycle, it retrieves the request from the queue and actually performs the integration.
The queues act to limit the number of project builds at any time, and do this by storing requests. The integrator will only perform a request when it is the first item in the queue.
This post is about how builds are scheduled – I haven't covered how an actual build is performed. But that is an entirely different topic, so I will leave that for another time.
Since I've been involved in coding for CruiseControl.Net I've spent a lot of time just trying to figure out how it works. Some things are straight-forward and can be changed with a minimum of fuss. Other things seem obfuscated and are almost impossible to change.
However, given enough time – both to read the code and debug it – it’s possible to understand what is happening. And rather than forget all these things I’ve spent the time learning, I thought I’d write them down.
These posts will be technical and aim to cover the different hard-to-understand parts of CruiseControl.Net. I'm going to try and cover all the components of CruiseControl.Net – server, CCTray and dashboard – with a focus on some of the more challenging areas.
As a background I’m going to assume people know C# and have a basic understanding of the different parts of CruiseControl.Net (Remote, Core, WebDashboard, CCTrayLib, etc.) plus have looked around the code some.
Mainly I'm going to focus on how these issues affect development. The posts will be built around breaking down a "problem" area and seeing how to work with the code. As such, I'm more interested in the code than the functionality, although the two are related. So what I cover will be relevant to how CruiseControl.Net works, but the focus won't be on that.
So stay tuned and feel free to give me any feedback – I promise I'll listen.
Going Beyond the Defaults
This post is something a little different from normal. Instead of delving into the code or exploring security, I want to take a little time out to play with the dashboard. I've been thinking about the different ways to customise the dashboard recently – some obvious and others not so obvious. So, this post is based on how I know the dashboard works, and provides four ways of modifying the dashboard to suit your needs.
At its heart, the dashboard uses an extensible plug-in infrastructure. This means it is easy to change the items that the dashboard displays (or even write your own, more on this later).
Here is a snapshot of the server display in the dashboard:
On the left-hand side of the page there are a series of options: “Server Report”, “View Server Log”, etc. These options are plug-ins that have been turned on, and they can easily be changed.
In the dashboard folder there is a file called dashboard.config. Opening this file shows an XML file with two major sections: remoteServices and plugins.
remoteServices lists all the CruiseControl.Net servers that the dashboard will monitor (yes, it is possible to monitor more than one). The only downside is that the dashboard connects via .Net Remoting, so it needs to be inside the firewall.
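For example, monitoring two servers is just a matter of listing both (the names and URLs here are made up):

```xml
<remoteServices>
  <servers>
    <!-- Each server the dashboard should monitor. -->
    <server name="build1" url="tcp://build1:21234/CruiseManager.rem" allowForceBuild="true" />
    <server name="build2" url="tcp://build2:21234/CruiseManager.rem" allowForceBuild="false" />
  </servers>
</remoteServices>
```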
plugins lists all the plug-ins that have been configured. These are broken down into sections – which map to the different areas in the dashboard. These areas are:
- farmPlugins: global area for the dashboard where no server, project or build has been selected
- serverPlugins: plug-ins for the server level
- projectPlugins: plug-ins for a specific project
- buildPlugins: plug-ins for a specific build
- securityPlugins: defines the allowed login options (version 1.5 or later)
Within these areas it is possible to do two things:
- Add or remove options
- Change the order in which the options appear
For example, in the snapshot above there are two security items: "View Security Configuration" and "View User List". On a non-secured installation of CruiseControl.Net there is no point in showing these (they are turned on in a default install). Removing them is just a matter of deleting the entries from the config and restarting IIS.
The following snapshot shows this:
In this snapshot I have removed the security plug-ins and re-ordered the menu items slightly.
Note: The dashboard caches the configuration settings. This means changing the dashboard.config isn’t enough – IIS also needs to be restarted before the changes will be detected.
Styling with Style
Now, moving on to a slightly harder approach – modifying the CSS. Since the dashboard is just an HTML website it is possible to change the look-and-feel easily by changing the CSS file.
This file is called cruisecontrol.css and it sits in the main folder. It doesn’t cover every part of the dashboard but it does cover the major parts.
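For example, a red theme only needs a few rules changed – note that the selectors below are illustrative; check cruisecontrol.css itself for the real class names:

```css
/* Illustrative only - the selectors in cruisecontrol.css differ,
   so look up the real class names in the file itself. */
body {
    background-color: #fff0f0;            /* pale red page background */
    font-family: Verdana, Arial, sans-serif;
}
a {
    color: #800000;                       /* restyle links to a red theme */
}
```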
For example here is a version that has been styled red:
Unfortunately not every part is covered in the stylesheet, so some parts will stubbornly stay the same no matter how much the stylesheet is changed.
I should also mention here that there are a couple of images that can be changed – the CruiseControl.Net logo at the top and the ThoughtWorks logo at the bottom (although I’m not sure on the legal rights of changing them).
These logos are located in the images folder under the dashboard and are called ccnet_logo.gif and tw_dev_logo.gif.
Templates, Templates, and Wait… More Templates
Under the hood the dashboard generates HTML (since it is a web application). However, it doesn't do this directly – instead it uses the nVelocity engine to take a template and generate the HTML. This simplifies development and has an additional benefit: it gives the more adventurous administrator the ability to modify the templates.
These templates sit in the templates folder and there are quite a few of them. The main ones are:
- SiteTemplate.vm: The main page layout, including the header, version number and footer
- TopMenu.vm: the breadcrumbs underneath the main logo
- FarmSideBar.vm, ServerSideBar.vm, ProjectSideBar.vm, BuildSideBar.vm: the side menus for the various areas
- ProjectGrid.vm: the grid that is displayed at the farm and server levels
Modifying these requires a little knowledge of HTML, plus some time. While it is not possible to add new information (e.g. a description of the project, how many successful builds, etc.) it is possible to move things around and delete unwanted items.
As an example I have modified SiteTemplate.vm and ProjectGrid.vm to produce the following layout:
Here I have condensed the top of the page by removing the login and documentation links and moving the breadcrumbs to the right. I’ve also changed the side-bar to the right hand side and removed some of the columns from the grid.
Note: modifying the templates is completely unsupported and may cause problems with upgrades! However it does provide the ultimate in flexibility.
The final customisation option on my list is also the most involved – writing a new plug-in. This is fairly straight-forward – at least in terms of what is required for the infrastructure. Since I’ve already posted on this topic before (read it here) I’m not going to cover it again, but it is worthwhile remembering that this is an option.
So, this post has covered a number of different ways to customise the dashboard – from simplest to hardest. There are bound to be other ways as well, so if my options don’t provide what is needed – keep looking.
Since this post was written, security has been modified slightly. Please see this post for further details.
Welcome Back to Security
Here is the second in a set of scenarios on security. In my previous scenario (here) I looked at a small team; this time I'll look at a large team.
The company is Acme Banking, a large multi-national bank. Lu is the manager of the lending application software development department and has a staff of 19 people working for him. He has charged his system admin, Peter, with securing CruiseControl.Net.
Some background: Lu is responsible for three applications: the lending applications system (LAS), the loan approval and tracking system (LATS) and the bad debt analysis and recovery system (BARS). Each system is an independent web application with a complex set of business rules and a backing database. To allow interactions between the systems, each application exposes web services. Finally, each application has a number of support tools that are Windows-based.
However, to simplify things, Lu’s department is only responsible for development. The QA department is responsible for all testing (including deployments) and the Server department is responsible for the actual deployment to PROD. All of these deployments are done via hand-overs, with the receiving department responsible for getting the binaries from the build server.
As for department structure, each team has a team leader and a senior developer. There are also between two and six junior developers in each team. Peter is not included in any of the teams as his full-time role is supporting the developers. All staff are on the same network and have Windows logins.
Lu wants to limit access for each project to the team that is responsible for it. No team is allowed access to any project that belongs to another team. Additionally, only the team leader and senior developer are allowed to start/stop the projects, although the junior developers can force builds. Everything done in the system must be audited.
The Build Setup
Each application has two projects – main and tools. Main includes the actual web application, plus the web services, while tools contains all the support tools. This makes a total of six projects.
Each project contains everything required to build the binaries and deploy them to the build server (for the other departments). These are triggered on an interval basis, plus every night at 3am.
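In ccnet.config terms, the trigger setup described above might look like the following sketch (the interval matches the one used later in this post; the scheduleTrigger handles the 3am build):

```xml
<triggers>
  <intervalTrigger buildCondition="IfModificationExists" seconds="300"/>
  <scheduleTrigger time="03:00" buildCondition="ForceBuild"/>
</triggers>
```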
Since the hand-over process is manual, there is no impact on other environments. However, considering the team size and low level of trust, they want everything locked down.
There will be three security zones: low, medium and high. The following table shows these zones:
|Zone||Permissions||Projects||Members|
|Low||Permission to force/abort builds.||All||Junior developers|
|Medium||As above, plus permission to start/stop projects||All||Team leaders and senior developers|
|High||As above, plus access to security information||None||Lu & Peter|
Each application will be divided into low and medium zones. The high level security zone is generic to the system rather than application-specific.
These goals give a total of seven security groups – low and medium for each application plus a high security group. Everybody within the department will belong to one of these seven groups.
The actual model is a little more complex, but I've put together a diagram as follows:
Again I’m going to use the same basic configuration for each project:
<intervalTrigger buildCondition="IfModificationExists" seconds="300"/>
The project name (i.e. LAS-Main) will be changed for each project – a total of six projects.
Again the first step is turning on security – I just added the sessionSecurity element, plus the settings and assertions children.
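As a sketch, that empty skeleton looks something like this (element names as described in this post — check the security documentation for your version before relying on them):

```xml
<sessionSecurity>
  <settings>
    <!-- user definitions go here -->
  </settings>
  <assertions>
    <!-- server-level role assertions go here -->
  </assertions>
</sessionSecurity>
```

The following sections fill in the two children: the users first, then the assertions.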
Adding The Roles and Users
Next, I added the users. Since everyone is on the network I added them with ldapUser elements. This is defined as follows:
<ldapUser name="lu.jones" domain="localhost"/> <!-- Manager -->
<ldapUser name="peter.smith" domain="localhost"/> <!-- SysAdmin -->
<ldapUser name="mark.doulos" domain="localhost"/> <!-- LAS Team leader -->
<ldapUser name="jill.white" domain="localhost"/> <!-- LAS Senior developer -->
<ldapUser name="john.asher" domain="localhost"/> <!-- LAS Junior developer -->
<!-- Remaining users omitted -->
Note there is a domain in here – I've just changed it to localhost for this example. Without the domain, the Active Directory authentication will fail as the authenticator won't know where to look.
I’ve also added a server-level assertion for the high-security permissions:
<roleAssertion name="Admin" defaultRight="Allow">
This just says that Lu and Peter have full access to everything.
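Filled out with its two members, the assertion might look like the following sketch (the users/userName child elements are my assumption — check the security documentation for the exact child element names):

```xml
<roleAssertion name="Admin" defaultRight="Allow">
  <users>
    <userName name="lu.jones"/>
    <userName name="peter.smith"/>
  </users>
</roleAssertion>
```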
Configuring the Clients
This is exactly the same as scenario one for CCTray. Each individual user will need to go in and configure their security authentication. The only difference is they can use the WinLogin authentication method.
At the moment it is not possible to add the authentication to the dashboard. I'll look into adding an authentication plug-in and then I'll update this post.
Locking Down Projects
The final step is to lock down the projects. This is where there is slightly more work than scenario one. Since each project needs to be secured, every project will need to be modified.
There are two ways this can be done. One way would be to add the security to each project; the second is to define roles and then link each project to them. Since there are two projects for each application, with identical permissions, I'm going to use the second approach.
In the settings section of sessionSecurity I define each application role. The following shows an example:
<roleAssertion name="LAS-Developers" forceBuild="Allow" defaultRight="Deny">
<!-- Remaining users omitted -->
<roleAssertion name="LAS-Admin" forceBuild="Allow" startProject="Allow" stopProject="Allow" defaultRight="Deny">
This defines the developer and admin (team leader/senior developer) roles for an application. Each of the other applications has a similar definition except with a different list of users.
Then, in each project I add a security section like the following:
<security type="defaultProjectSecurity" defaultRight="Deny">
<roleAssertion name="LAS-Developers" ref="LAS-Developers"/>
<roleAssertion name="LAS-Admin" ref="LAS-Admin"/>
<roleAssertion name="Admin" ref="Admin"/>
</security>
The first two assertions will change to match the application each project is for, while the last one is the same for all projects (gives Peter & Lu access to the projects).
Finally, Lu wants everything audited. This is very simple to do: just add an audit logger and an audit reader. This is done in the sessionSecurity element by adding the following elements:
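A minimal sketch of these two elements (the auditLogger element and its type name are my assumption, mirroring the auditReader element shown below):

```xml
<auditLogger type="xmlFileAuditLogger"/>
<auditReader type="xmlFileAuditReader"/>
```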
By default these will use an audit log called SecurityAudit.xml in the same directory as the executables. This can be changed by adding a location attribute as follows:
<auditReader type="xmlFileAuditReader" location="C:\Logs\CCNet_Audit.xml"/>
Note that the location must be set on both elements, and they must point to the same location. This is because the two items work independently of each other, and it is possible to have multiple audit loggers.
And that’s the security settings for this scenario.
As always, working through this scenario I found a couple of issues.
First, while the LDAP security is generic (i.e. the authentication is handled externally), I still needed to define every user. This is because of the internal validation that happens. As different users would belong to different roles, I couldn't just use a wildcard for the authentication. While this does make things a little more secure, it also forces duplication of security settings. Instead it would be nice to define something that keys off an LDAP group, hence reducing the need for duplication.
Secondly, there is no dashboard plug-in to detect a Windows login. I haven't implemented this as I'm not sure of the best way yet (if anybody has any suggestions, let me know!) This wouldn't be so bad, except the only way to view the audit logs currently is via the dashboard!
Once again I have posted the complete example on my storage site:
There is no dashboard configuration for the reason above.
Feel free to send me any suggestions for improvements or ideas on how to clarify these scenarios or security in general.
A.K.A Why I Love Documentation
I recently put together a small tool to help diagnose errors in the configuration (i.e. ccnet.config). After getting some good feedback on it, I decided it would be nice to add it to the trunk, and even nicer if it ended up in the final installer (since this is what most people will use).
Like a lot of areas in CC.Net, there is no documentation on developing for the installer. But given that installers are reasonably simple, I thought I’d give it a try. And that is where my frustration began!
I should add as a side note, I’m a keen user of WiX. This is a set of open-source tools that take in an XML file and generate an installer out the other end. It is very simple to use (although I do have some problems with it) and has a great price tag – free!
CC.Net uses NSIS instead, which is more of a scripting tool for building installers. It is also free, but I find it a little harder to read and understand (of course I’m probably just used to WiX and find it hard to convert).
Anyway, back on topic, this post is about what I have learnt to get a new project included in the installer.
Where I Started
Now, since I knew that CC.Net uses NSIS I thought all the information for the installer would be in ccnet.nsi (this is the script file for the server). I was wrong!
The file has all the instructions for installing the product, and nothing about the files to be included. I knew the names of the files currently included, but nowhere did I find these in the script file.
Thus I was lost!
First Things First: The Build
As we start out with code, not binaries, the first step is to generate the executables. CC.Net already has a pretty good build file that uses NAnt. This does all sorts of wonderful things, including compilation, unit tests, code coverage and most importantly for me – it generates the installer.
Again, it doesn't have any lists of files, but it has three important pieces of information:
- First, it has what is actually built and what parameters are passed in
- Next it has when the installer is generated, and the build steps that must happen first
- Finally it has a build step for generating a deploy folder
I’m going to shuffle the order of these around a little because I didn’t discover them in this order.
Instead, the first piece I found out was the order of build steps. The build step (or target) that generates the installer is called "dist". This depends on the deploy targets for CCTray, server and dashboard (there's an intermediary target in between called "deploy").
Now while “dist” is the actual build step that generates the installer, it needs all the files copied to a deploy folder first. This turns out to be critical because the .nsi file refers to this folder and builds the installer using the files in this folder!
Thus to include new files in the installer all I needed to do was modify the build step for generating the deploy folder (see important piece of information #3).
Where Do You Think You’re Coming From?
Looking at the existing executables I saw they came from the build folder, and were slightly re-arranged to be in the correct locations (plus there were some exclusions). However, the executables for the validator went into another folder – the standard bin\Release folder from C#.
Again I searched and searched through the build script to find the step that copies them – but I couldn’t find it anywhere. Which brings me to important piece of information #1 – the parameters that go into the compilation.
It turns out that CC.Net projects have an extra build configuration called “Build”. This is very similar to the standard “Debug” and “Release” builds, but (and this is a big but) it sends the outputs of the build to a folder underneath the global build folder!
Once I had figured out these important key points, everything else fell into place!
A Recipe for New Projects
So, here are the steps for adding a new project so it is included in the installer:
- Add a new build target to the project called “Build”.
- Configure this project so it outputs the binaries to a folder underneath the global build folder.
- Add some lines to the NAnt build script to copy the binaries into the correct location within the deploy folder.
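The three steps above can be sketched in NAnt terms as follows (the target, property, and path names are hypothetical — match them to the names the real build script uses):

```xml
<!-- hypothetical NAnt fragment: copy the new project's output into the deploy folder -->
<target name="deploy.validator">
  <copy todir="${deploy.dir}/server">
    <fileset basedir="${build.dir}/Validator">
      <include name="*.exe"/>
      <include name="*.dll"/>
    </fileset>
  </copy>
</target>
```

Hooking a target like this into the chain that "dist" depends on is what gets the new binaries picked up by the installer.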
And that's it! It took me almost three hours to figure out these three steps.
Addendum: Adding a Shortcut
It is also very easy to add a shortcut. This is covered in the NSIS documentation, so it didn’t take me long to figure it out.
In the .nsi script file there are a number of “Section” entries. There are three main sections:
- CruiseControl.NET Server (SEC01)
- Web Dashboard (SEC02)
- Examples (SEC04)
Each of these sections maps to an option the user can choose. In these sections are the commands for installing the product, including creating shortcuts.
So, to add a new shortcut, select the correct section and add a CreateShortCut command with the correct parameters (location and target).
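For example, a new shortcut entry might look like this (the link and target paths are illustrative assumptions, not the actual paths in ccnet.nsi):

```nsis
; hypothetical addition inside the server section (SEC01)
CreateShortCut "$SMPROGRAMS\CruiseControl.NET\Validator.lnk" "$INSTDIR\server\Validator.exe"
```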
Even simpler than adding files to the installer!